CN106372059A - Information input method and information input device - Google Patents

Information input method and information input device

Info

Publication number
CN106372059A
Authority
CN
China
Prior art keywords
input
voice information
emoticon picture
user
emoticon
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610770021.0A
Other languages
Chinese (zh)
Other versions
CN106372059B (en)
Inventor
秦添
赵晓蕾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201610770021.0A priority Critical patent/CN106372059B/en
Publication of CN106372059A publication Critical patent/CN106372059A/en
Priority to EP17155430.6A priority patent/EP3291224A1/en
Priority to KR1020170018938A priority patent/KR101909807B1/en
Priority to US15/429,353 priority patent/US10210865B2/en
Priority to JP2017023271A priority patent/JP6718828B2/en
Application granted granted Critical
Publication of CN106372059B publication Critical patent/CN106372059B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/63Querying
    • G06F16/632Query formulation
    • G06F16/634Query by example, e.g. query by humming
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0236Character input methods using selection techniques to select from displayed items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0237Character input methods using prediction or retrieval techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/274Converting codes to words; Guess-ahead of partial word inputs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/08Speech classification or search
    • G10L15/18Speech classification or search using natural language modelling
    • G10L15/1815Semantic context, e.g. disambiguation of the recognition hypotheses based on word meaning
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/28Constructional details of speech recognition systems
    • G10L15/30Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2201/00Electronic components, circuits, software, systems or apparatus used in telephone systems
    • H04M2201/34Microprocessors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2201/00Electronic components, circuits, software, systems or apparatus used in telephone systems
    • H04M2201/40Electronic components, circuits, software, systems or apparatus used in telephone systems using speech recognition

Abstract

The invention discloses an information input method and an information input device. One implementation of the method comprises the following steps: receiving voice information input by a user, the voice information being associated with content to be entered into an input area of an application; taking emoticon pictures associated with the voice information as candidate results, the emoticon pictures including pictures that, in historical inputs by multiple users of voice information semantically associated with the received voice information, were entered into the input area of an application more than a threshold number of times; and entering the emoticon picture selected by the user from the candidate results into the input area of the application. When the user performs voice input, the semantics of the voice input can be accurately understood, and matching emoticon pictures are intelligently recommended according to the content and emotion of the speech, helping the user input emoticon pictures quickly and reducing the cumbersome operation of searching for emoticon pictures, thereby providing convenience to the user.

Description

Information input method and device
Technical field
The present application relates to the field of computer technology, specifically to the field of input methods, and more particularly to an information input method and device.
Background
At present, some input methods provide a voice input function. When a user uses the voice input function of an input method, the approach generally adopted is to convert the input voice into a sentence and then enter the sentence.
However, voice input performed in this manner cannot meet users' needs in different situations, such as the need to input different types of emoticon pictures, so the voice input function is rather limited.
Summary of the invention
The present application provides an information input method and device, in order to solve the technical problems mentioned in the background section above.
In a first aspect, the present application provides an information input method, the method comprising: receiving voice information input by a user, the voice information being associated with content to be entered into an input area of an application; taking emoticon pictures associated with the voice information as candidate results, the emoticon pictures including: emoticon pictures that, in historical inputs by multiple users of voice information semantically associated with the received voice information, were entered into the input area of an application more than a threshold number of times; and entering the emoticon picture selected by the user from the candidate results into the input area of the application.
In a second aspect, the present application provides an information input device, the device comprising: a receiving unit configured to receive voice information input by a user, the voice information being associated with content to be entered into an input area of an application; a selection unit configured to take emoticon pictures associated with the voice information as candidate results, the emoticon pictures including: emoticon pictures that, in historical inputs by multiple users of voice information semantically associated with the received voice information, were entered into the input area of an application more than a threshold number of times; and an input unit configured to enter the emoticon picture selected by the user from the candidate results into the input area of the application.
With the information input method and device provided by the present application, voice information input by a user is received, the voice information being associated with content to be entered into an input area of an application; emoticon pictures associated with the voice information are taken as candidate results, the emoticon pictures including: emoticon pictures that, in historical inputs by multiple users of voice information semantically associated with the received voice information, were entered into the input area of an application more than a threshold number of times; and the emoticon picture selected by the user from the candidate results is entered into the input area of the application. In this way, when the user performs voice input, the semantics of the voice input can be accurately understood, and matching emoticon pictures are intelligently recommended according to the content and emotion of the speech, helping the user input emoticon pictures quickly, reducing the cumbersome operation of searching for emoticon pictures, and providing convenience to the user.
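The three claimed steps can be sketched with a minimal toy stand-in. The history table, threshold value, and the word-overlap test used in place of real semantic analysis are all illustrative assumptions, not part of the application:

```python
# Toy sketch of the claimed method; names and data are illustrative only.
HISTORY = {
    # semantically associated utterance -> {emoticon picture: input count}
    "nice and relaxed": {"coffee": 12, "beer": 2},
    "work finished ahead of schedule": {"coffee": 9},
}
COUNT_THRESHOLD = 5  # hypothetical "number of times" threshold

def semantically_associated(a: str, b: str) -> bool:
    # Placeholder: a real system would use semantic analysis; here, word overlap.
    return bool(set(a.split()) & set(b.split()))

def candidate_emoticons(voice_text: str) -> list[str]:
    """Step 202: collect emoticon pictures whose historical input count,
    summed over semantically associated utterances, exceeds the threshold."""
    counts: dict[str, int] = {}
    for utterance, pics in HISTORY.items():
        if semantically_associated(voice_text, utterance):
            for pic, n in pics.items():
                counts[pic] = counts.get(pic, 0) + n
    return [p for p, n in counts.items() if n > COUNT_THRESHOLD]

# Step 201 would receive and transcribe the voice input; step 203 would enter
# the picture the user picks from these candidates into the input area.
print(candidate_emoticons("a nice relaxed Friday afternoon"))  # ['coffee']
```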
Brief description of the drawings
Other features, objects and advantages of the present application will become more apparent upon reading the detailed description of non-limiting embodiments made with reference to the following drawings:
Fig. 1 is an exemplary system architecture diagram of an embodiment to which the information input method or device of the present application can be applied;
Fig. 2 is a flow chart of one embodiment of the information input method according to the present application;
Fig. 3 is a flow chart of another embodiment of the information input method according to the present application;
Fig. 4 is a structural schematic diagram of the information input device according to the present application;
Fig. 5 is a structural schematic diagram of a computer system adapted to implement the information input device of an embodiment of the present application.
Detailed description of embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the related invention, rather than to limit the invention. It should also be noted that, for ease of description, only the parts related to the invention are shown in the drawings.
It should be noted that, where no conflict arises, the embodiments in the present application and the features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 of an embodiment to which the information input method or device of the present application can be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104 and a server 105. The network 104 serves as a medium providing transmission links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless transmission links, fiber optic cables and the like.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104, so as to receive or send messages and the like. Various communication applications may be installed on the terminal devices 101, 102, 103, for example input-method applications, browser applications, search applications, word-processing applications and the like.
The terminal devices 101, 102, 103 may be various electronic devices that have a display screen and support network communication, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, desktop computers and the like.
The server 105 may obtain a large volume of emoticon pictures and send them to the input-method applications on the terminal devices 101, 102, 103. The input-method applications on the terminal devices 101, 102, 103 may record the voice information input by users and establish the correspondence between the voice information and the emoticon pictures entered on screen.
It should be understood that the numbers of terminal devices, networks and servers in Fig. 1 are merely schematic. Any number of terminal devices, networks and servers may be provided according to implementation needs.
Referring to Fig. 2, it illustrates a flow 200 of one embodiment of the information input method according to the present application. It should be noted that the information input method provided by the embodiments of the present application may be executed by the terminal devices 101, 102, 103 in Fig. 1, and accordingly the information input device may be provided in the terminal devices 101, 102, 103. The method comprises the following steps:
Step 201: receiving voice information input by a user.
In the present embodiment, the user's voice information is associated with content to be entered into the input area of an application. For example, when users chat with each other through an instant messaging application and need to enter content into the input area of the instant messaging application, they may input voice information through a voice input device such as a microphone.
Step 202: taking emoticon pictures associated with the voice information as candidate results.
In the present embodiment, the emoticon pictures associated with the voice information input by the user include: emoticon pictures that, in historical inputs by multiple users of voice information semantically associated with the received voice information, were entered into the input area of an application more than a threshold number of times.
In the present embodiment, the emoticon pictures that multiple users most often chose to enter on screen when inputting semantically associated voice information can be recommended, as candidate results, to the user currently inputting voice information.
In some optional implementations of the present embodiment, the method further comprises: obtaining historical input information of multiple users, the historical input information including the voice information input in historical inputs and the emoticon pictures entered into the input areas of applications; determining multiple pieces of semantically associated voice information; aggregating the emoticon pictures corresponding to the multiple pieces of semantically associated voice information; and selecting, from the aggregated emoticon pictures, those whose input count is greater than the threshold.
In the present embodiment, in order to take the emoticon pictures associated with the voice information as candidate results and recommend them to the user while the user performs voice input, a correspondence between a large volume of user-input voice information and a large volume of emoticon pictures may be established in advance.
In the present embodiment, the user in step 201 may refer to the user currently inputting voice information. Before the voice information input by the current user is received in step 201, the voice information once input by a large number of users in historical inputs, together with the emoticon pictures they chose to enter into the input area of an application (for example, the input area of an instant messaging application), that is, the emoticon pictures entered on screen, may be obtained in advance. Semantically associated voice information can be found from the historical inputs of the large number of users, yielding multiple voice information sets. Each voice information set contains semantically associated voice information input by multiple users. Meanwhile, the emoticon pictures that multiple users chose to enter on screen when inputting the voice information in a voice information set can be aggregated, yielding an emoticon picture set.
In this way, a correspondence between voice information sets composed of semantically associated voice information and emoticon picture sets can be established, with each voice information set corresponding to one emoticon picture set. The correspondence between a voice information set and an emoticon picture set can represent which emoticon pictures multiple users chose to enter on screen when inputting semantically associated voice information. Further, the emoticon pictures whose on-screen count in the emoticon picture set corresponding to a voice information set is greater than the threshold can be found, that is, the emoticon pictures that multiple users most often chose to enter on screen when inputting semantically associated voice information.
After the correspondence between the large volume of user-input voice information and the large volume of emoticon pictures has been established in advance, when the current user in step 201 performs voice input, the voice information associated with the voice information input by the current user can be found, and the voice information set to which that associated voice information belongs can be determined. Then, the emoticon pictures whose on-screen count in the corresponding emoticon picture set is greater than the threshold, that is, the pictures that multiple users most often entered on screen when inputting semantically associated voice information, can be found and taken as candidate results.
For example, when multiple users input semantically associated voice information such as "nice and relaxed" and "this week's work was finished ahead of schedule" by voice in historical inputs, the emoticon picture they entered on screen was the relaxed-type emoticon picture "coffee"; that is, the input count of the emoticon picture "coffee" exceeds the threshold.
When the user in step 201 currently inputs "a relaxing Friday afternoon" by voice, since "a relaxing Friday afternoon" is semantically related to "nice and relaxed" and "this week's work was finished ahead of schedule", the emoticon picture "coffee" entered on screen for those utterances can be recommended, as a candidate result, to the user currently inputting the voice information.
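The aggregation described above, grouping semantically associated utterances into voice information sets, counting on-screen emoticon pictures, and filtering by the threshold, can be sketched as follows. The log entries, the hand-written utterance-to-set mapping standing in for semantic analysis, and the threshold value are illustrative assumptions:

```python
from collections import Counter, defaultdict

# Hypothetical on-screen history: (utterance, emoticon picture entered).
HISTORY_LOG = [
    ("nice and relaxed", "coffee"),
    ("nice and relaxed", "coffee"),
    ("this week's work finished early", "coffee"),
    ("so annoyed right now", "angry_cat"),
]

# Stand-in for semantic analysis: maps each utterance to its voice
# information set; a real system would derive this from semantic similarity.
VOICE_SET_OF = {
    "nice and relaxed": "relaxed",
    "this week's work finished early": "relaxed",
    "so annoyed right now": "annoyed",
}
THRESHOLD = 2  # hypothetical on-screen count threshold

def build_correspondence(log):
    """Aggregate on-screen counts per voice information set and keep only
    the pictures entered more than THRESHOLD times (the claimed filtering)."""
    counts = defaultdict(Counter)
    for utterance, picture in log:
        counts[VOICE_SET_OF[utterance]][picture] += 1
    return {vset: [p for p, n in c.items() if n > THRESHOLD]
            for vset, c in counts.items()}

print(build_correspondence(HISTORY_LOG))
# {'relaxed': ['coffee'], 'annoyed': []}
```

At query time, a new utterance would be assigned to a voice information set the same way, and the set's surviving pictures would be shown as candidates.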
Step 203: entering the emoticon picture selected by the user from the candidate results into the input area of the application.
In the present embodiment, after the emoticon pictures associated with the voice information are taken as candidate results in step 202, the emoticon picture that the user selects from the candidate results can be entered into the input area of the application. That is, the user can select, from the candidate results, one of the pictures that multiple users most often entered on screen when inputting voice information semantically associated with the voice information input in step 201, and enter it into the input area of the application.
For example, when users chat through an instant messaging application, the voice information "a relaxing Friday afternoon" input by the current user is semantically related to voice information such as "nice and relaxed" and "this week's work was finished ahead of schedule" previously input by multiple users, and the emoticon picture those users chose to enter on screen when inputting such voice information was the emoticon picture "coffee", whose on-screen count exceeds the threshold; the candidate results may then contain the emoticon picture "coffee". The user currently inputting "a relaxing Friday afternoon" by voice can select the emoticon picture "coffee" from the candidate results to enter it on screen.
In the present embodiment, the above steps 201-203 can be executed by an input method. When the user performs voice input, the input method can accurately understand the semantics of the voice input and intelligently recommend matching emoticon pictures according to the content and emotion of the speech, helping the user input emoticon pictures quickly, reducing the cumbersome operation of searching for emoticon pictures, and providing convenience to the user.
Referring to Fig. 3, it illustrates a flow 300 of another embodiment of the information input method according to the present application. It should be noted that the information input method provided by the embodiments of the present application may be executed by the terminal devices 101, 102, 103 in Fig. 1. The method comprises the following steps:
Step 301: receiving voice information input by a user.
In the present embodiment, the user's voice information is associated with content to be entered into the input area of an application. For example, when a user needs to enter content into the input area of an application, the user may input voice information through a voice input device such as a microphone.
Step 302: taking emoticon pictures associated with the semantic recognition result corresponding to the voice information as candidate results.
In the present embodiment, after the voice information input by the user is received in step 301, semantic recognition can be performed on the voice information to obtain the sentence corresponding to the voice information. Then, semantic recognition can be performed on the sentence using rule matching to obtain a semantic recognition result.
In the present embodiment, semantic recognition can be performed on the sentence corresponding to the input voice information using rule matching to obtain a semantic recognition result. The semantic recognition result includes: a mood type indicating the user's mood. For example, rule-matching templates containing keywords indicating the user's mood type may be set in advance, with a corresponding template for each type of user mood. When the sentence corresponding to the voice information input by the user matches a rule-matching template, the mood type of the user's mood can be determined according to the type of the matched template.
In the present embodiment, a correspondence between each mood type and emoticon pictures can be established in advance. According to this correspondence, the emoticon pictures corresponding to the mood type obtained by performing semantic recognition on the voice information input by the user are determined and can then be taken as candidate results. For example, when the user inputs "nice and relaxed", semantic recognition can identify the user's mood type as the relaxed type, and emoticon pictures belonging to the relaxed type, such as the emoticon picture "coffee", can be taken as candidate results.
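A minimal sketch of the rule-matching recognition and the mood-type-to-emoticon correspondence might look like the following. The keyword templates, mood types, and picture names are illustrative assumptions; a production system would use far richer templates:

```python
import re

# Hypothetical rule-matching templates: keyword patterns per mood type.
MOOD_TEMPLATES = {
    "relaxed": re.compile(r"\b(relaxed|relaxing|leisure|finished early)\b"),
    "happy": re.compile(r"\b(great|happy|wonderful)\b"),
}

# Hypothetical mood-type -> emoticon-picture correspondence.
MOOD_EMOTICONS = {
    "relaxed": ["coffee", "hammock"],
    "happy": ["grin"],
}

def recognize_mood(sentence: str):
    """Return the mood type whose template matches the recognized sentence,
    or None when no template matches."""
    for mood, pattern in MOOD_TEMPLATES.items():
        if pattern.search(sentence.lower()):
            return mood
    return None

def candidates(sentence: str) -> list[str]:
    """Map the recognized mood type to its emoticon pictures."""
    return MOOD_EMOTICONS.get(recognize_mood(sentence), [])

print(candidates("a relaxing Friday afternoon"))  # ['coffee', 'hammock']
```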
In the present embodiment, the correspondence between a large volume of emoticon pictures and mood types can be established in the following way: multiple emoticon pictures are obtained in advance and annotated, yielding annotation information that indicates the mood type of the user mood corresponding to each emoticon picture. For example, the happy mood type of the user's mood can be divided into subtypes such as very happy and fairly happy. The emoticon pictures and their annotation information can be used as sample data to train a deep learning model; for example, the emoticon pictures of each happy subtype, together with their annotation information, can be used as sample data for training. After the deep learning model has been trained using the multiple emoticon pictures and their annotation information, it can learn the correspondence between the features of emoticon pictures and mood types. The trained deep learning model can then be used to identify the mood types corresponding to a large volume of emoticon pictures, establishing the correspondence between the emoticon pictures and the mood types.
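As a heavily simplified stand-in for the training described above, the following trains a perceptron on hand-made annotated feature vectors. A real implementation would train a deep network on the emoticon images themselves, so the features, labels, and hyperparameters here are purely illustrative:

```python
# Toy stand-in for the deep-learning step: a perceptron over annotated
# feature vectors, e.g. [brightness, smile_curvature]; label 1 = happy, 0 = sad.
SAMPLES = [
    ([0.9, 0.8], 1),  # annotated happy
    ([0.8, 0.9], 1),  # annotated happy
    ([0.2, 0.1], 0),  # annotated sad
    ([0.1, 0.2], 0),  # annotated sad
]

def train(samples, epochs=20, lr=0.5):
    """Perceptron learning: nudge weights toward misclassified samples."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(model, x):
    """Label an unseen picture's feature vector with the learned mood."""
    w, b = model
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

model = train(SAMPLES)
print(predict(model, [0.85, 0.9]))  # 1: classified as happy
```

Once trained, such a model would be run over the unlabeled picture collection to build the picture-to-mood-type correspondence table.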
In the present embodiment, the user in step 301 may refer to the user of current input voice information.Can pass through Before step 301 receives the voice messaging of active user's input, obtain mass users in advance and once inputted in history input Voice messaging and select to be input to the input area (input area of such as instant messaging application of application during input voice information Domain) the expression picture i.e. expression picture of upper screen.Semantic associated language can be found out from the history input of mass users Message ceases, and obtains multiple voice messaging set.The voice comprising multiple user inputs in each voice messaging set is associated Voice messaging.Meanwhile, when can be polymerized voice messaging in input voice information set for multiple users, the upper screen of selection Expression picture, obtains expression picture set.
It is thus possible to set up the voice messaging set being made up of semantic associated voice messaging and expression picture set Corresponding relation, each voice messaging set corresponds to an expression picture set.Voice messaging set and expression picture set Corresponding relation can represent multiple users in the associated voice messaging of input semanteme, have selected on which expression picture Screen.It is more than frequency threshold value it is possible to further find out upper screen number of times in voice messaging set corresponding expression picture set Expression picture.Find out multiple users in the associated voice messaging of input semanteme, selection is shielded more expression figures Piece.
After the correspondence between the massive amount of user-input voice information and the massive set of expression pictures has been established in advance, when the current user performs voice input in step 301, the voice information associated with the current input may be found, and the voice information set to which that associated voice information belongs may be determined. The expression pictures in the corresponding expression picture set whose on-screen count exceeds the count threshold — the pictures most frequently committed to the screen by multiple users inputting semantically associated voice information — may then be taken as candidate results.
For example, when multiple users have input semantically associated voice information such as "so relaxed" or "this week's work was finished ahead of schedule" in their input history, semantic recognition may determine that the users' mood type is "relaxed", and expression pictures of the "relaxed" type — which include the expression picture "coffee" — may be recommended as candidate results. When users inputting such semantically associated voice information select the expression picture "coffee" for the screen, the selection may be recorded.
Thus, when the current user in step 301 inputs "relaxing Friday afternoon" by voice, because this input is semantically associated with voice information such as "so relaxed" and "this week's work was finished ahead of schedule", the expression picture "coffee" that was committed to the screen for that voice information may be recommended as a candidate result to the current user.
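The lookup at input time might be sketched as follows. The keyword-overlap association test is an illustrative stand-in for real semantic matching, and the sets and candidate lists are invented for the example.

```python
# Hypothetical sketch of the input-time lookup described above: find the
# voice-information set the current input is semantically associated
# with, then offer that set's frequently screened pictures as candidates.

def associated_set(new_text, voice_sets):
    """voice_sets: {set_id: list of historical utterances}."""
    new_words = set(new_text.split())
    best_id, best_overlap = None, 0
    for set_id, utterances in voice_sets.items():
        overlap = max(len(new_words & set(u.split())) for u in utterances)
        if overlap > best_overlap:
            best_id, best_overlap = set_id, overlap
    return best_id

voice_sets = {
    "relaxed": ["so relaxed today", "work finished ahead of schedule"],
    "busy": ["so much overtime this week"],
}
candidates_by_set = {"relaxed": ["coffee"], "busy": ["alarm-clock"]}

set_id = associated_set("relaxed friday afternoon", voice_sets)
print(candidates_by_set[set_id])  # ['coffee']
```

In a production system the association step would use learned semantic similarity rather than word overlap, but the control flow — associate, look up the set, return its frequent pictures — is the same.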
In step 303, the expression picture that the user selects from the candidate results is input into the input area of the application.
In the present embodiment, after the expression pictures associated with the voice information have been presented as candidate results in step 302, the expression picture that the user selects from those candidates may be input into the input area of the application. That is, from among the pictures most frequently committed to the screen by multiple users when inputting voice information semantically associated with the input received in step 301, the user may choose the one to input into the application's input area.
For example, when users are chatting through an instant messaging application, the current user's voice input "relaxing Friday afternoon" is semantically related to voice information previously input by multiple users, such as "so relaxed" and "this week's work was finished ahead of schedule". If, when inputting that earlier voice information, multiple users selected the expression picture "coffee" for the screen — that is, its on-screen count exceeds the count threshold — then the candidate results may include "coffee". The current user, after inputting "relaxing Friday afternoon" by voice, may then select "coffee" from the candidate results and commit it to the screen.
In the present embodiment, the above steps 301-303 may be executed by an input method. When the user inputs by voice, the input method can accurately understand the semantics of the spoken input and, based on its content and emotion, intelligently recommend matching expression pictures. This helps the user input expression pictures quickly, spares the user the tedious operation of searching for them, and provides convenience.
Referring to Fig. 4, as an implementation of the methods shown in the figures above, the present application provides an embodiment of an information input device. This device embodiment corresponds to the method embodiment shown in Fig. 2, and the device may be applied to various electronic devices.
As shown in Fig. 4, the information input device 400 of the present embodiment includes: a receiving unit 401, a selecting unit 402, and an input unit 403. The receiving unit 401 is configured to receive voice information input by a user, the voice information being associated with content to be input into the input area of an application. The selecting unit 402 is configured to take the expression pictures associated with the voice information as candidate results, the expression pictures including: expression pictures whose count of being input into the application's input area, in the input history of multiple users inputting voice information semantically associated with the voice information, exceeds a count threshold. The input unit 403 is configured to input the expression picture that the user selects from the candidate results into the input area.
In some optional implementations of the present embodiment, the device 400 further includes: a speech recognition unit (not shown), configured to perform speech recognition on the voice information before the expression pictures associated with it are taken as candidate results, obtaining the sentence corresponding to the voice information; a semantic recognition unit (not shown), configured to perform semantic recognition on the sentence using rule matching, obtaining a semantic recognition result that includes a mood type indicating the user's mood; and an expression picture determining unit (not shown), configured to take the expression pictures corresponding to the mood type as the expression pictures associated with the voice information.
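The rule-matching semantic recognition performed by the semantic recognition unit might be sketched as follows. The patterns and mood labels below are illustrative assumptions, not the patent's actual rules.

```python
# Hypothetical sketch of rule-matching semantic recognition: match the
# recognized sentence against hand-written patterns, each mapped to a
# mood type, and return the mood type of the first matching rule.

import re

MOOD_RULES = [
    (re.compile(r"relax|free time|ahead of schedule"), "relaxed"),
    (re.compile(r"overtime|deadline"), "stressed"),
]

def recognize_mood(sentence):
    """Return the mood type of the first matching rule, or None."""
    for pattern, mood in MOOD_RULES:
        if pattern.search(sentence):
            return mood
    return None

print(recognize_mood("this week work finished ahead of schedule"))  # relaxed
```

Rule order matters here: the first matching pattern wins, so more specific rules should precede more general ones.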
In some optional implementations of the present embodiment, the device 400 further includes: an information obtaining unit (not shown), configured to obtain annotation information of a plurality of expression pictures, the annotation information indicating the mood type corresponding to each expression picture; a training unit (not shown), configured to train a deep learning model using the expression pictures and the annotation information as sample data; an expression type identifying unit (not shown), configured to identify the mood types corresponding to the massive set of expression pictures using the trained deep learning model; and an establishing unit, configured to establish the correspondence between the massive set of expression pictures and mood types.
In some optional implementations of the present embodiment, the device 400 further includes: a history input information obtaining unit (not shown), configured to obtain the history input information of multiple users before the voice information input by the user is received, the history input information including: voice information input in the input history, and the expression pictures input into the input area of an application; an associated voice information determining unit (not shown), configured to determine a plurality of semantically associated pieces of voice information; an expression picture aggregating unit (not shown), configured to aggregate the expression pictures corresponding to the plurality of semantically associated pieces of voice information; and an expression picture selecting unit (not shown), configured to select, from the expression pictures, those whose corresponding input count exceeds a count threshold.
In some optional implementations of the present embodiment, the device 400 further includes: an input method execution unit (not shown), configured to receive the voice information input by the user using an input method.
Fig. 5 shows a schematic structural diagram of a computer system suitable for implementing the information input device of the embodiments of the present application.
As shown in Fig. 5, the computer system 500 includes a central processing unit (CPU) 501, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage portion 508 into a random access memory (RAM) 503. The RAM 503 also stores the various programs and data required for the operation of the system 500. The CPU 501, ROM 502, and RAM 503 are connected to one another by a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
The following components are connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse, and the like; an output portion 507 including a cathode ray tube (CRT), a liquid crystal display (LCD), and the like, as well as a speaker; a storage portion 508 including a hard disk and the like; and a communications portion 509 including a network interface card such as a LAN card or a modem. The communications portion 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 510 as needed, so that a computer program read from it may be installed into the storage portion 508 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program containing program code for executing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communications portion 509, and/or installed from the removable medium 511.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings; for example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should further be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks therein, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
As another aspect, the present application also provides a non-volatile computer storage medium, which may be the non-volatile computer storage medium included in the device of the above embodiments, or may exist separately without being assembled into a terminal. The non-volatile computer storage medium stores one or more programs which, when executed by a device, cause the device to: receive voice information input by a user, the voice information being associated with content to be input into the input area of an application; take the expression pictures associated with the voice information as candidate results, the expression pictures including expression pictures whose count of being input into the application's input area, in the input history of multiple users inputting voice information semantically associated with the voice information, exceeds a count threshold; and input the expression picture that the user selects from the candidate results into the input area of the application.
The above description is only a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the particular combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept — for example, technical solutions in which the above features are replaced by (but not limited to) technical features with similar functions disclosed in the present application.

Claims (10)

1. An information input method, characterized in that the method comprises:
receiving voice information input by a user, the voice information being associated with content to be input into the input area of an application;
taking expression pictures associated with the voice information as candidate results, the expression pictures comprising: expression pictures whose count of being input into the input area of an application, in the input history of multiple users inputting voice information semantically associated with the voice information, exceeds a count threshold;
inputting the expression picture that the user selects from the candidate results into the input area of the application.
2. The method according to claim 1, characterized in that before taking the expression pictures associated with the voice information as candidate results, the method further comprises:
performing speech recognition on the voice information to obtain a sentence corresponding to the voice information;
performing semantic recognition on the sentence using rule matching to obtain a semantic recognition result, the semantic recognition result comprising: a mood type indicating the mood of the user;
taking the expression pictures corresponding to the mood type as the expression pictures associated with the voice information.
3. The method according to claim 2, characterized in that the method further comprises:
obtaining annotation information of a plurality of expression pictures, the annotation information indicating the mood type corresponding to each expression picture;
training a deep learning model using the expression pictures and the annotation information as sample data;
identifying the mood types corresponding to a massive set of expression pictures using the trained deep learning model;
establishing a correspondence between the massive set of expression pictures and mood types.
4. The method according to claim 3, characterized in that before receiving the voice information input by the user, the method further comprises:
obtaining history input information of multiple users, the history input information comprising: voice information input in the input history, and expression pictures input into the input area of an application;
determining a plurality of semantically associated pieces of voice information;
aggregating the expression pictures corresponding to the plurality of semantically associated pieces of voice information;
selecting, from the expression pictures, the expression pictures whose corresponding input count exceeds a count threshold.
5. The method according to claim 4, characterized in that receiving voice information input by a user comprises:
receiving the voice information input by the user using an input method.
6. An information input device, characterized in that the device comprises:
a receiving unit, configured to receive voice information input by a user, the voice information being associated with content to be input into the input area of an application;
a selecting unit, configured to take expression pictures associated with the voice information as candidate results, the expression pictures comprising: expression pictures whose count of being input into the input area of an application, in the input history of multiple users inputting voice information semantically associated with the voice information, exceeds a count threshold;
an input unit, configured to input the expression picture that the user selects from the candidate results into the input area of the application.
7. The device according to claim 6, characterized in that the device further comprises:
a speech recognition unit, configured to perform speech recognition on the voice information before the expression pictures associated with the voice information are taken as candidate results, obtaining a sentence corresponding to the voice information;
a semantic recognition unit, configured to perform semantic recognition on the sentence using rule matching, obtaining a semantic recognition result, the semantic recognition result comprising: a mood type indicating the mood of the user;
an expression picture determining unit, configured to take the expression pictures corresponding to the mood type as the expression pictures associated with the voice information.
8. The device according to claim 7, characterized in that the device further comprises:
an information obtaining unit, configured to obtain annotation information of a plurality of expression pictures, the annotation information indicating the mood type corresponding to each expression picture;
a training unit, configured to train a deep learning model using the expression pictures and the annotation information as sample data;
an expression type identifying unit, configured to identify the mood types corresponding to a massive set of expression pictures using the trained deep learning model;
an establishing unit, configured to establish a correspondence between the massive set of expression pictures and mood types.
9. The device according to claim 8, characterized in that the device further comprises:
a history input information obtaining unit, configured to obtain, before the voice information input by the user is received, history input information of multiple users, the history input information comprising: voice information input in the input history, and expression pictures input into the input area of an application;
an associated voice information determining unit, configured to determine a plurality of semantically associated pieces of voice information;
an expression picture aggregating unit, configured to aggregate the expression pictures corresponding to the plurality of semantically associated pieces of voice information;
an expression picture selecting unit, configured to select, from the expression pictures, the expression pictures whose corresponding input count exceeds a count threshold.
10. The device according to claim 9, characterized in that the device further comprises:
an input method execution unit, configured to receive the voice information input by the user using an input method.
CN201610770021.0A 2016-08-30 2016-08-30 Data inputting method and device Active CN106372059B (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN201610770021.0A CN106372059B (en) 2016-08-30 2016-08-30 Data inputting method and device
EP17155430.6A EP3291224A1 (en) 2016-08-30 2017-02-09 Method and apparatus for inputting information
KR1020170018938A KR101909807B1 (en) 2016-08-30 2017-02-10 Method and apparatus for inputting information
US15/429,353 US10210865B2 (en) 2016-08-30 2017-02-10 Method and apparatus for inputting information
JP2017023271A JP6718828B2 (en) 2016-08-30 2017-02-10 Information input method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610770021.0A CN106372059B (en) 2016-08-30 2016-08-30 Data inputting method and device

Publications (2)

Publication Number Publication Date
CN106372059A true CN106372059A (en) 2017-02-01
CN106372059B CN106372059B (en) 2018-09-11

Family

ID=57902432

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610770021.0A Active CN106372059B (en) 2016-08-30 2016-08-30 Data inputting method and device

Country Status (5)

Country Link
US (1) US10210865B2 (en)
EP (1) EP3291224A1 (en)
JP (1) JP6718828B2 (en)
KR (1) KR101909807B1 (en)
CN (1) CN106372059B (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106873800A (en) * 2017-02-20 2017-06-20 北京百度网讯科技有限公司 Information output method and device
CN106888158A (en) * 2017-02-28 2017-06-23 努比亚技术有限公司 A kind of instant communicating method and device
CN106886606A (en) * 2017-03-21 2017-06-23 联想(北京)有限公司 Method and system for recommending expression according to user speech
CN107153496A (en) * 2017-07-04 2017-09-12 北京百度网讯科技有限公司 Method and apparatus for inputting emotion icons
CN107450746A (en) * 2017-08-18 2017-12-08 联想(北京)有限公司 A kind of insertion method of emoticon, device and electronic equipment
CN107479723A (en) * 2017-08-18 2017-12-15 联想(北京)有限公司 A kind of insertion method of emoticon, device and electronic equipment
CN107609092A (en) * 2017-09-08 2018-01-19 北京百度网讯科技有限公司 Intelligent response method and apparatus
CN108335226A (en) * 2018-02-08 2018-07-27 江苏省农业科学院 Agriculture Germplasm Resources Information real-time intelligent acquisition system
CN108733651A (en) * 2018-05-17 2018-11-02 新华网股份有限公司 Emoticon prediction technique and model building method, device, terminal
CN109033423A (en) * 2018-08-10 2018-12-18 北京搜狗科技发展有限公司 Simultaneous interpretation caption presentation method and device, intelligent meeting method, apparatus and system
CN109213332A (en) * 2017-06-29 2019-01-15 北京搜狗科技发展有限公司 A kind of input method and device of expression picture
CN109525725A (en) * 2018-11-21 2019-03-26 三星电子(中国)研发中心 A kind of information processing method and device based on emotional state
CN109683726A (en) * 2018-12-25 2019-04-26 北京微播视界科技有限公司 Characters input method, device, electronic equipment and storage medium
CN109814730A (en) * 2017-11-20 2019-05-28 北京搜狗科技发展有限公司 Input method and device, the device for input
CN110019885A (en) * 2017-08-01 2019-07-16 北京搜狗科技发展有限公司 A kind of expression data recommended method and device
CN110149549A (en) * 2019-02-26 2019-08-20 腾讯科技(深圳)有限公司 The display methods and device of information
CN110910898A (en) * 2018-09-15 2020-03-24 华为技术有限公司 Voice information processing method and device
CN111489131A (en) * 2019-01-25 2020-08-04 北京搜狗科技发展有限公司 Information recommendation method and device
CN113297359A (en) * 2021-04-23 2021-08-24 阿里巴巴新加坡控股有限公司 Information interaction method and device

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10057358B2 (en) * 2016-12-09 2018-08-21 Paypal, Inc. Identifying and mapping emojis
US11310176B2 (en) * 2018-04-13 2022-04-19 Snap Inc. Content suggestion system
CN109697290B (en) * 2018-12-29 2023-07-25 咪咕数字传媒有限公司 Information processing method, equipment and computer storage medium
WO2020166495A1 (en) * 2019-02-14 2020-08-20 ソニー株式会社 Information processing device, information processing method, and information processing program
KR102381737B1 (en) * 2019-03-22 2022-03-31 박정길 Voice Emoticon Editing Method and Media Being Recorded with Program Executing Voice Emoticon Editing Method
US11335360B2 (en) * 2019-09-21 2022-05-17 Lenovo (Singapore) Pte. Ltd. Techniques to enhance transcript of speech with indications of speaker emotion
CN113051427A (en) * 2019-12-10 2021-06-29 华为技术有限公司 Expression making method and device
CN112148133B (en) * 2020-09-10 2024-01-23 北京百度网讯科技有限公司 Method, device, equipment and computer storage medium for determining recommended expression
KR102343036B1 (en) * 2021-02-10 2021-12-24 주식회사 인피닉 Annotation method capable of providing working guides, and computer program recorded on record-medium for executing method thereof
KR102310587B1 (en) * 2021-02-10 2021-10-13 주식회사 인피닉 Method of generating skeleton data for consecutive images, and computer program recorded on record-medium for executing method thereof
KR102310588B1 (en) * 2021-02-10 2021-10-13 주식회사 인피닉 Method of generating skeleton data for artificial intelligence learning, and computer program recorded on record-medium for executing method thereof
KR20230033208A (en) * 2021-08-30 2023-03-08 박정훈 System and method for communication service using facial expressions learned from images of companion animal
US11657558B2 (en) * 2021-09-16 2023-05-23 International Business Machines Corporation Context-based personalized communication presentation
WO2023141887A1 (en) * 2022-01-27 2023-08-03 Oppo广东移动通信有限公司 Semantic communication transmission method and terminal device
CN115641837A (en) * 2022-12-22 2023-01-24 北京资采信息技术有限公司 Intelligent robot conversation intention recognition method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101183294A (en) * 2007-12-17 2008-05-21 腾讯科技(深圳)有限公司 Expression input method and apparatus
US20130159919A1 (en) * 2011-12-19 2013-06-20 Gabriel Leydon Systems and Methods for Identifying and Suggesting Emoticons
CN104076944A (en) * 2014-06-06 2014-10-01 北京搜狗科技发展有限公司 Chat emoticon input method and device
CN104298429A (en) * 2014-09-25 2015-01-21 北京搜狗科技发展有限公司 Information presentation method based on input and input method system
US20150100537A1 (en) * 2013-10-03 2015-04-09 Microsoft Corporation Emoji for Text Predictions
US20160224687A1 (en) * 2011-12-12 2016-08-04 Empire Technology Development Llc Content-based automatic input protocol selection

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003129711A (en) 2001-10-26 2003-05-08 Matsushita Electric Works Ltd Electric lock control device
JP2006048352A (en) * 2004-08-04 2006-02-16 Matsushita Electric Ind Co Ltd Communication terminal having character image display function and control method therefor
JP2007003669A (en) 2005-06-22 2007-01-11 Murata Mach Ltd Document creating device
US7983910B2 (en) * 2006-03-03 2011-07-19 International Business Machines Corporation Communicating across voice and text channels with emotion preservation
JP2008129711A (en) 2006-11-17 2008-06-05 Sky Kk Input character conversion system
KR100902861B1 (en) 2007-11-20 2009-06-16 경원대학교 산학협력단 Mobile communication terminal for outputting voice received text message to voice using avatar and Method thereof
KR100941598B1 (en) 2007-11-27 2010-02-11 (주)씨앤에스 테크놀로지 telephone communication system and method for providing users with telephone communication service comprising emotional contents effect
US8756527B2 (en) * 2008-01-18 2014-06-17 Rpx Corporation Method, apparatus and computer program product for providing a word input mechanism
JP2009277015A (en) * 2008-05-14 2009-11-26 Fujitsu Ltd Input support program, input support apparatus and input support method
US8351581B2 (en) * 2008-12-19 2013-01-08 At&T Mobility Ii Llc Systems and methods for intelligent call transcription
US9015033B2 (en) * 2010-10-26 2015-04-21 At&T Intellectual Property I, L.P. Method and apparatus for detecting a sentiment of short messages
JP5928449B2 (en) * 2011-04-26 2016-06-01 日本電気株式会社 Input assist device, input assist method, and program
KR20130069263A (en) * 2011-12-18 2013-06-26 인포뱅크 주식회사 Information processing method, system and recording medium
WO2013094982A1 (en) 2011-12-18 2013-06-27 인포뱅크 주식회사 Information processing method, system, and recoding medium
US20140108308A1 (en) * 2012-07-13 2014-04-17 Social Data Technologies, LLC System and method for combining data for identifying compatibility
US20150262238A1 (en) * 2014-03-17 2015-09-17 Adobe Systems Incorporated Techniques for Topic Extraction Using Targeted Message Characteristics
CN106462513B (en) * 2014-06-30 2019-05-28 歌乐株式会社 Information processing system and car-mounted device
JP6122816B2 (en) * 2014-08-07 2017-04-26 シャープ株式会社 Audio output device, network system, audio output method, and audio output program
US10924444B2 (en) * 2014-12-02 2021-02-16 Facebook, Inc. Device, method, and graphical user interface for managing customer relationships using a lightweight messaging platform
KR101733011B1 (en) * 2015-06-18 2017-05-08 라인 가부시키가이샤 Apparatus for providing recommendation based social network service and method using the same
US20180077095A1 (en) * 2015-09-14 2018-03-15 X Development Llc Augmentation of Communications with Emotional Data
US10157224B2 (en) * 2016-02-03 2018-12-18 Facebook, Inc. Quotations-modules on online social networks

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101183294A (en) * 2007-12-17 2008-05-21 腾讯科技(深圳)有限公司 Expression input method and apparatus
US20160224687A1 (en) * 2011-12-12 2016-08-04 Empire Technology Development Llc Content-based automatic input protocol selection
US20130159919A1 (en) * 2011-12-19 2013-06-20 Gabriel Leydon Systems and Methods for Identifying and Suggesting Emoticons
CN104335607A (en) * 2011-12-19 2015-02-04 机械地带有限公司 Systems and methods for identifying and suggesting emoticons
US20150100537A1 (en) * 2013-10-03 2015-04-09 Microsoft Corporation Emoji for Text Predictions
CN104076944A (en) * 2014-06-06 2014-10-01 北京搜狗科技发展有限公司 Chat emoticon input method and device
CN104298429A (en) * 2014-09-25 2015-01-21 北京搜狗科技发展有限公司 Information presentation method based on input and input method system

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106873800A (en) * 2017-02-20 2017-06-20 北京百度网讯科技有限公司 Information output method and device
CN106888158A (en) * 2017-02-28 2017-06-23 努比亚技术有限公司 A kind of instant communicating method and device
CN106886606A (en) * 2017-03-21 2017-06-23 联想(北京)有限公司 Method and system for recommending expression according to user speech
CN109213332A (en) * 2017-06-29 2019-01-15 北京搜狗科技发展有限公司 A kind of input method and device of expression picture
CN107153496A (en) * 2017-07-04 2017-09-12 北京百度网讯科技有限公司 Method and apparatus for inputting emotion icons
US10984226B2 (en) 2017-07-04 2021-04-20 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for inputting emoticon
CN107153496B (en) * 2017-07-04 2020-04-28 北京百度网讯科技有限公司 Method and device for inputting emoticons
CN110019885B (en) * 2017-08-01 2021-10-15 北京搜狗科技发展有限公司 Expression data recommendation method and device
CN110019885A (en) * 2017-08-01 2019-07-16 北京搜狗科技发展有限公司 A kind of expression data recommended method and device
CN107479723A (en) * 2017-08-18 2017-12-15 联想(北京)有限公司 A kind of insertion method of emoticon, device and electronic equipment
CN107450746A (en) * 2017-08-18 2017-12-08 联想(北京)有限公司 A kind of insertion method of emoticon, device and electronic equipment
CN107609092A (en) * 2017-09-08 2018-01-19 北京百度网讯科技有限公司 Intelligent response method and apparatus
CN107609092B (en) * 2017-09-08 2021-03-09 北京百度网讯科技有限公司 Intelligent response method and device
CN109814730A (en) * 2017-11-20 2019-05-28 北京搜狗科技发展有限公司 Input method and device, the device for input
CN109814730B (en) * 2017-11-20 2023-09-12 北京搜狗科技发展有限公司 Input method and device and input device
CN108335226A (en) * 2018-02-08 2018-07-27 江苏省农业科学院 Real-time intelligent acquisition system for agricultural germplasm resource information
CN108733651A (en) * 2018-05-17 2018-11-02 新华网股份有限公司 Emoticon prediction method and model building method, device, and terminal
CN109033423A (en) * 2018-08-10 2018-12-18 北京搜狗科技发展有限公司 Simultaneous interpretation subtitle display method and device, and intelligent conference method, apparatus, and system
CN110910898A (en) * 2018-09-15 2020-03-24 华为技术有限公司 Voice information processing method and device
CN109525725A (en) * 2018-11-21 2019-03-26 三星电子(中国)研发中心 Information processing method and device based on emotional state
CN109525725B (en) * 2018-11-21 2021-01-15 三星电子(中国)研发中心 Information processing method and device based on emotional state
CN109683726A (en) * 2018-12-25 2019-04-26 北京微播视界科技有限公司 Character input method and device, electronic equipment, and storage medium
CN109683726B (en) * 2018-12-25 2022-08-05 北京微播视界科技有限公司 Character input method, character input device, electronic equipment and storage medium
CN111489131A (en) * 2019-01-25 2020-08-04 北京搜狗科技发展有限公司 Information recommendation method and device
CN110149549A (en) * 2019-02-26 2019-08-20 腾讯科技(深圳)有限公司 Information display method and device
CN113297359A (en) * 2021-04-23 2021-08-24 阿里巴巴新加坡控股有限公司 Information interaction method and device
CN113297359B (en) * 2021-04-23 2023-11-28 阿里巴巴新加坡控股有限公司 Method and device for information interaction

Also Published As

Publication number Publication date
US20180061407A1 (en) 2018-03-01
JP6718828B2 (en) 2020-07-08
EP3291224A1 (en) 2018-03-07
JP2018036621A (en) 2018-03-08
US10210865B2 (en) 2019-02-19
KR101909807B1 (en) 2018-10-18
KR20180025121A (en) 2018-03-08
CN106372059B (en) 2018-09-11

Similar Documents

Publication Publication Date Title
CN106372059B (en) Data input method and device
US11601552B2 (en) Hierarchical interface for adaptive closed loop communication system
US11721356B2 (en) Adaptive closed loop communication system
US8738375B2 (en) System and method for optimizing speech recognition and natural language parameters with user feedback
CN105719649B (en) Speech recognition method and device
US20210020165A1 (en) Alert generator for adaptive closed loop communication system
CN109767765A (en) Dialogue-script matching method and device, storage medium, and computer equipment
CN109101545A (en) Natural language processing method, apparatus, equipment and medium based on human-computer interaction
US9984679B2 (en) System and method for optimizing speech recognition and natural language parameters with user feedback
CN106503236A (en) Question classification method and device based on artificial intelligence
CN105654950A (en) Self-adaptive voice feedback method and device
CN110956956A (en) Voice recognition method and device based on policy rules
CN108121800A (en) Information generating method and device based on artificial intelligence
CN108268450B (en) Method and apparatus for generating information
CN107526809A (en) Method and apparatus for pushing music based on artificial intelligence
CN109739605A (en) Method and apparatus for generating information
CN108038243A (en) Music recommendation method and apparatus, storage medium, and electronic equipment
US20210021709A1 (en) Configurable dynamic call routing and matching system
CN110059172A (en) Method and apparatus for recommending answers based on natural language understanding
WO2024005944A1 (en) Meeting attendance prompt
US20220207066A1 (en) System and method for self-generated entity-specific bot
CN108717851A (en) Speech recognition method and device
US11146678B2 (en) Determining the context of calls
Bisser et al. Introduction to the Microsoft Conversational AI Platform
JP6087704B2 (en) Communication service providing apparatus, communication service providing method, and program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant