CN106453823B - Method, device and terminal for quickly sending information


Info

Publication number
CN106453823B
Authority
CN
China
Prior art keywords
gesture
input
expression
preset
expression information
Prior art date
Legal status
Active
Application number
CN201610796357.4A
Other languages
Chinese (zh)
Other versions
CN106453823A (en)
Inventor
查文
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201610796357.4A
Publication of CN106453823A
Application granted
Publication of CN106453823B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72469 User interfaces specially adapted for cordless or mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72436 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for text messaging, e.g. short messaging services [SMS] or e-mails
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Environmental & Geological Engineering (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a method, device and system for quickly sending information, which acquires and recognizes an operation gesture input by the user, obtains the preset gesture matching the input, and outputs information based on the mapping relation between preset gestures and information.

Description

Method, device and terminal for quickly sending information
Technical Field
The invention relates to the interaction between a user and a mobile device during instant messaging, in particular to a method, device and terminal for quickly sending information based on user gestures.
Background
The main mode of man-machine interaction on a mobile terminal is that the machine senses the user's input and responds accordingly. In the prior art, a mobile device can obtain user operations and environmental status in many ways, for example, capturing key presses through physical keys, capturing gestures through the touch screen, measuring ambient light through a photosensitive sensor, and obtaining location information through GPS. After acquiring the corresponding data, the mobile terminal gives feedback according to the preset instructions of the application.
Instant messaging is the most common communication mode of the internet era; common applications include WeChat, mobile QQ and the like. During instant communication, users send not only text information but also expression (emoticon) information.
Expression information refers to the static and dynamic images used to convey emotion during instant chats. It has greatly enriched chat content, freeing conversation from monotonous text description; humorous animated pictures in particular make chatting colorful and fun.
During research, development and use, the inventor found that sending expressions, an essential function of communication software, involves a rather cumbersome interaction. For example, when the user wants to send the expression representing "OK":
[image: the "OK" emoticon]
the following steps are required:
First, the emoticon panel is opened by clicking the emoticon button.
Second, the user turns pages, or clicks through tab pages, until the button for the "OK" expression is found.
Third, the user clicks "OK" to place the expression in the conversation window, then clicks the send button to send it.
If the user only wants to send simple, frequently used expressions during instant messaging, such a process is cumbersome, time consuming, and inefficient, which is certainly an undesirable user experience.
Disclosure of Invention
In order to solve the above technical problems in the prior art, the invention provides a method for quickly sending expression information: by defining shortcut sending actions for expressions, an expression can be sent by performing a simple gesture. The method comprises the following steps:
detecting input operation of a user, and acquiring operation characteristics corresponding to the input operation; inquiring a preset gesture matched with the input action according to the operation characteristics; extracting expression information corresponding to the input action according to the mapping relation between the preset gesture and the expression information; and sending the expression information.
Preferably, the acquiring the operation characteristics corresponding to the input action includes: continuously acquiring an input position of the input action and a time value corresponding to the input position; and obtaining an input track corresponding to the input action according to the input positions which are continuous in time sequence.
Preferably, querying a preset gesture matched with the input gesture according to the operation characteristics comprises: judging whether a gesture matching the input track of the operation characteristic exists among the preset gestures; if such a preset gesture exists, determining it as the gesture matching the touch-screen operation, and if not, judging the input invalid because no corresponding target gesture exists.
Preferably, a mapping relationship between the preset gesture and the expression information is preset.
Preferably, the preset mapping relationship between the preset gesture and the expression information includes: establishing a standardized gesture-expression mapping table; and defining the mapping relation between the preset gesture and the expression information by using the standardized gesture-expression mapping table.
Preferably, the preset mapping relationship between the preset gesture and the expression information includes: counting the use frequency of the expression information, if the use frequency of the expression information is higher than a preset value, associating a preset gesture for the expression, and establishing a gesture-expression mapping table based on statistics; and defining the mapping relation between the preset gesture and the expression information by using the gesture-expression mapping table based on statistics.
Preferably, the mapping relation between the preset gesture and the expression information is updated based on user input.
Preferably, updating the mapping relationship between the preset gesture and the expression information based on the user input comprises: receiving an input expression deleting operation, searching an expression corresponding to the expression deleting operation in a preset gesture-expression mapping table, and deleting the corresponding expression in the gesture-expression mapping table.
Preferably, updating the mapping relationship between the preset gesture and the expression information based on the user input comprises: receiving input expression increasing operation, searching an expression corresponding to the expression increasing operation in a preset gesture-expression mapping table, if the expression cannot be searched, detecting the input operation of a user, acquiring operation characteristics corresponding to the input operation, associating the operation characteristics with the expression corresponding to the expression increasing operation, and adding the operation characteristics to the gesture-expression mapping table.
Preferably, updating the mapping relationship between the preset gesture and the expression information based on the user input comprises: receiving an input expression changing operation, searching for the expression corresponding to the expression changing operation in a preset gesture-expression mapping table, and if the expression is found, detecting the input operation of the user, acquiring the operation characteristic corresponding to the input operation, associating the operation characteristic with the expression corresponding to the expression changing operation, and replacing the expression information corresponding to the expression changing operation in the gesture-expression mapping table.
Preferably, the mapping relationship between the preset gesture and the expression is stored through a gesture-expression mapping table, and the mapping table is synchronized among multiple devices.
Preferably, the mapping relationship between the preset gesture and the expression is stored through a gesture-expression mapping table, and the mapping table is bound with the user ID.
Preferably, the input operation area of the user comprises a terminal interface in a screen locking state.
The invention also provides a device for quickly sending expression information, which comprises a feature acquisition module, a query matching module, an expression extraction module and an expression sending module. The feature acquisition module is used for detecting the input operation of a user and acquiring the operation feature corresponding to the input operation; the query matching module is used for querying the preset gesture matched with the input action according to the operation feature; the expression extraction module is used for extracting the expression information corresponding to the input action according to the mapping relation between the preset gesture and the expression information; and the expression sending module is used for sending the expression information.
Preferably, the feature acquisition module includes: the position recording module and the timer are used for continuously acquiring an input position of the input action and a time value corresponding to the input position; and the track calculation module is used for obtaining an input track corresponding to the input action according to the input positions which are continuous in time sequence.
Preferably, the query matching module includes a matching judgment module, used for judging whether a gesture matching the input track of the operation characteristic exists among the preset gestures; if such a preset gesture exists, it is determined as the gesture matching the touch-screen operation, and if not, the input is judged invalid because no corresponding target gesture exists.
Preferably, the device further comprises a presetting module for presetting the mapping relationship between the preset gesture and the expression information.
Preferably, the preset module comprises a standardization module used for establishing a standardized gesture-expression mapping table, and a first mapping relation definition module used for defining the mapping relation between the preset gesture and the expression information by using the standardized gesture-expression mapping table.
Preferably, the presetting module comprises: the statistical association module is used for counting the use frequency of the expression information, associating preset gestures for the expressions if the use frequency of the expression information is higher than a preset value, and establishing a gesture-expression mapping table based on statistics; and the second mapping relation definition module is used for defining the mapping relation between the preset gesture and the expression information by using the gesture-expression mapping table based on statistics.
Preferably, the device includes an updating module for updating the mapping relationship between the preset gesture and the expression information based on the user input.
Preferably, the update module further comprises: and the expression deleting submodule is used for receiving an input expression deleting operation, searching an expression corresponding to the expression deleting operation in a preset gesture-expression mapping table, and deleting the corresponding expression in the gesture-expression mapping table.
Preferably, the update module further comprises: and the expression increasing submodule is used for receiving input expression increasing operation, searching an expression corresponding to the expression increasing operation in a preset gesture-expression mapping table, if the expression cannot be searched, detecting the input operation of a user, acquiring an operation characteristic corresponding to the input operation, associating the operation characteristic with the expression corresponding to the expression increasing operation, and adding the operation characteristic to the gesture-expression mapping table.
Preferably, the update module further comprises: an expression changing submodule, used for receiving an input expression changing operation, searching for the expression corresponding to the expression changing operation in a preset gesture-expression mapping table, and if the expression is found, detecting the input operation of the user, acquiring the operation characteristic corresponding to the input operation, associating the operation characteristic with the expression corresponding to the expression changing operation, and replacing the expression information corresponding to the expression changing operation in the gesture-expression mapping table.
Preferably, the device includes a synchronization module configured to synchronize a gesture-expression mapping table among multiple devices, where the gesture-expression mapping table stores a mapping relationship between the preset gesture and the expression.
Preferably, the apparatus includes a user ID binding module, configured to bind the gesture-expression mapping table with a user ID for use across multiple devices, where the gesture-expression mapping table stores the mapping relationship between the preset gestures and expressions.
Preferably, the input operation area of the user comprises a terminal interface in a screen locking state.
The invention also provides a terminal device, which executes the above method or comprises the above device.
The invention has the following beneficial effects: when inputting an expression, the user need not open the expression panel; the user directly draws a track by hand in the dialogue area, the system recognizes the track and executes the corresponding send command, and the expression is sent immediately. The method is fast and convenient, and brings a good user experience.
Drawings
The following detailed description of an embodiment of the invention is provided in conjunction with the accompanying drawings;
Fig. 1 is a flowchart of a method for quickly sending expression information in the first embodiment of the present invention.
Fig. 2 is a block diagram of a system for quickly sending expression information according to a second embodiment of the present invention.
Fig. 3 is a flowchart of a method for creating a gesture-expression mapping table based on a statistical method according to a third embodiment of the present invention.
Fig. 4 is a schematic diagram of a gesture-expression mapping table synchronization system based on ID binding according to a fourth embodiment of the present invention.
Fig. 5 is a flowchart of a method of expression sending substeps according to a fifth embodiment of the present invention.
Fig. 6 is a system block diagram of each sub-module constituting an expression sending module according to the fifth embodiment of the present invention.
Detailed Description
To illustrate the embodiments of the present invention and the technical solutions of the prior art more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Example one:
This embodiment provides a method for quickly sending expression information, as shown in fig. 1, comprising the following steps:
Step S101: detecting an input action of a user, and acquiring an operation characteristic corresponding to the input action.
The input action is performed in an area that allows user input. This designated input area may be part or all of the touch screen, such as the conversation area, or the terminal interface in a screen-locked state: when the user sees a message preview on the locked screen, a gesture can be input directly on the lock-screen interface, enabling a quick reply without unlocking the terminal.
The input actions refer to input performed by a user in an input interface in a time sequence, and the input actions are sensed and received by hardware and converted into sensor signals.
The operation characteristic refers to a set of parameter values related to the input action that the device derives from the detected input, such as the time value of each touch operation, the touch position corresponding to that time, and the touch pressure. From the time values, positions, and pressures, the number of fingers and the touch track can in turn be obtained.
During input, the device continuously acquires and records the time value of each input contact, together with the contact position and pressure corresponding to that time value; the operation characteristics corresponding to the input gesture are obtained from this record.
Ignoring contact pressure, the device continuously acquires the input positions of the action and the time values corresponding to them; the input track corresponding to the action is then the time-ordered sequence of these positions.
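Step S101 can be sketched as follows (a minimal Python illustration; `TrajectoryRecorder`, `on_touch`, and the `(time, x, y)` sample format are hypothetical names, not from the patent):

```python
import time

class TrajectoryRecorder:
    """Collects time-ordered input positions and yields the input track,
    ignoring contact pressure as described above."""

    def __init__(self):
        self.samples = []  # (time value, x position, y position)

    def on_touch(self, x, y, t=None):
        # Record each contact position with the time value it corresponds to.
        self.samples.append((time.monotonic() if t is None else t, x, y))

    def track(self):
        # The input track is the sequence of positions, continuous in time order.
        return [(x, y) for _, x, y in sorted(self.samples)]
```

A real implementation would receive `on_touch` callbacks from the platform's touch-event API; here the time value can also be passed in explicitly for testing.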
Step S102: recognizing the preset gesture matched with the input gesture according to the operation characteristics.
The recognition process comprises: judging whether a gesture matching the input track of the operation characteristic exists among the preset gestures; if such a preset gesture exists, determining it as the gesture matching the touch-screen operation, and if not, judging the input invalid because no corresponding target gesture exists.
In one specific implementation process, the operation characteristic of the input gesture is captured: based on the device continuously acquiring and recording the time values of the user's input and the touch positions and pressures corresponding to them, the touch track extracted from the operation characteristic is found to be "√"-shaped. The "√"-shaped track is then looked up in the gesture-expression mapping table, and if it is present, the preset gesture corresponding to the "√" track is recognized as the gesture matching the input.
Table 1 Gesture-expression mapping table
[table image: preset gesture tracks mapped to expressions, e.g. the "√" track to the "OK" expression]
Step S103: obtaining the expression information corresponding to the input gesture according to the mapping relation between the preset gesture and the expression information. Querying the preset gesture matched with the user's input gesture amounts to finding the expression corresponding to the user's gesture: in the gesture-expression mapping table, gestures and expression information are in one-to-one correspondence, so the expression information corresponding to the input gesture can be obtained by using the input gesture as the lookup condition.
In one specific implementation process, after the preset gesture corresponding to the "√"-shaped touch track is recognized as the gesture matching the input, the expression corresponding to the user's input gesture is obtained through the gesture-expression mapping table:
[image: the "OK" emoticon]
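Steps S102 and S103 can be sketched together (a toy illustration, assuming screen coordinates where y grows downward; the shape labels, the `classify_track` heuristic, and the table contents are hypothetical, since the patent does not specify a recognition algorithm):

```python
# Hypothetical gesture-expression mapping table: one-to-one, as in Table 1.
GESTURE_EXPRESSION_TABLE = {
    "check": "[OK]",  # the "√"-shaped track maps to the "OK" expression
}

def classify_track(track):
    """Toy classifier: a track that goes down and then back up (in screen
    coordinates) is treated as a "√". A real system would use template or
    model-based gesture matching."""
    ys = [y for _, y in track]
    if len(ys) < 3:
        return None
    k = ys.index(max(ys))      # lowest on-screen point of the stroke
    if 0 < k < len(ys) - 1:    # descends, turns, then ascends
        return "check"
    return None

def match_and_lookup(track):
    """S102: find the matching preset gesture; S103: extract its expression.
    Returns None when no preset gesture matches, i.e. the input is invalid."""
    shape = classify_track(track)
    return GESTURE_EXPRESSION_TABLE.get(shape) if shape else None
```

Any track with no matching entry simply yields no expression, mirroring the "input invalid" branch of the recognition process.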
Step S104: sending the expression information.
After the expression information is obtained, it is sent directly: the expression is entered into the dialog box and the send command is executed. The user can thus send expression information simply by inputting a short gesture command on the terminal's user interface.
From the user's point of view, the four steps above collapse into a single action: the user draws "√" in the input interface, and what they wanted to send appears in the chat interface:
[image: the "OK" emoticon]
This is certainly fast and efficient.
Of course, in some embodiments, to avoid a wrongly input message being sent out directly, the system may be set so that the expression is only entered into the dialog box and the message is sent manually by the user.
Example two:
This embodiment provides an apparatus for quickly sending expression information, as shown in fig. 2 and fig. 3, comprising the following modules:
The feature acquisition module is used for detecting the input action of the user in the session area and acquiring the operation characteristic corresponding to the input action.
The conversation area refers to an area for allowing a user to perform an input operation, and the area is a designated input area and may be a part or all of the touch screen.
The operation characteristic refers to a set of parameter values related to the input gesture that the device derives from the detected input, such as the time value of each touch operation, the touch position corresponding to that time, and the touch pressure. From these, the number of fingers and the touch track can in turn be obtained.
During user input, the device continuously acquires and records the time value of the input gesture, together with the touch position and contact pressure corresponding to that time value; the operation characteristics corresponding to the input gesture are obtained from this record.
The query matching module is used for recognizing the preset gesture matched with the input gesture according to the operation characteristics. The recognition process comprises: judging whether the touch track of the operation characteristic is the same as that of a preset gesture; if so, the preset gesture is recognized as the gesture matching the touch-screen operation; otherwise no target gesture corresponds and the input is invalid.
During use, the user can update the "gesture-expression mapping table" through an updating module, which comprises a deleting submodule, an adding submodule, and a changing submodule; these respectively delete entries from the table, add entries to the table, and change the gesture corresponding to an expression in the table.
Deleting the entries in the table comprises inputting the expression information to be deleted, searching the entries to be deleted by taking the expression information as an index, and executing deletion operation if the corresponding entries are found.
For example, if the user wants to delete the entry information of "smile" in the table, the user selects the expression information of "smile", searches for the corresponding entry by using the expression as an index, executes a deletion command, and deletes the corresponding entry content from the "gesture-expression mapping table".
[table image: the mapping table after the "smile" entry is deleted]
Adding the items in the table comprises selecting the expression items to be added, judging whether the gestures of the expression are preset or not, inputting the gestures matched with the added expressions if the gestures of the expression are not preset, and adding the expression items and the gestures into a 'gesture-expression mapping table'.
For example, if the user wants to add the item information of "parent" in the table, the expression information of "parent" is selected, whether the expression has a preset gesture is judged, if not, a gesture matched with the added expression is input, and the expression item and the gesture are added into the "gesture-expression mapping table".
[table image: the mapping table after the "parent" entry is added]
Changing an entry in the table comprises: selecting the expression entry to be changed, inputting a user-defined preset gesture, judging whether the expression already has a preset gesture, and if so, replacing the gesture in the "gesture-expression mapping table" with the user-defined one.
For example, if the user wants to change the item information of "OK" in the table, the expression information of "OK" is selected, a user-defined preset gesture "X" is input, and the user-defined preset gesture "X" is used to replace the gesture "√ in the" gesture-expression mapping table ".
[table image: the mapping table after the "OK" gesture "√" is replaced by "X"]
After the above operations are performed, the "gesture-expression mapping table" is updated accordingly.
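The three update operations described above (delete, add, change) can be sketched as follows; the class and method names are hypothetical, and the one-to-one gesture-expression relation stated in the text is assumed:

```python
class GestureExpressionTable:
    """Sketch of the updating module's submodules: delete an entry,
    add an entry, or change the gesture bound to an expression."""

    def __init__(self, entries=None):
        self.gesture_to_expr = dict(entries or {})  # gesture -> expression

    def _gesture_for(self, expression):
        # Reverse lookup, valid because the mapping is one-to-one.
        for g, e in self.gesture_to_expr.items():
            if e == expression:
                return g
        return None

    def delete(self, expression):
        # Deleting submodule: use the expression as index, remove its entry.
        g = self._gesture_for(expression)
        if g is not None:
            del self.gesture_to_expr[g]

    def add(self, gesture, expression):
        # Adding submodule: only add if the expression has no gesture yet.
        if self._gesture_for(expression) is None:
            self.gesture_to_expr[gesture] = expression

    def change(self, expression, new_gesture):
        # Changing submodule: replace the expression's existing gesture.
        g = self._gesture_for(expression)
        if g is not None:
            del self.gesture_to_expr[g]
            self.gesture_to_expr[new_gesture] = expression
```

Replaying the document's examples in order — deleting "smile", adding "parent", then changing the "OK" gesture from "√" to "X" — leaves a table with exactly those two updated entries.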
The preset gesture matched with the user's input gesture is queried, which finds the expression corresponding to the user's gesture: because gestures and expression information are in one-to-one correspondence in the gesture-expression mapping table, the expression information corresponding to the input gesture is obtained by using the input gesture as the search condition.
In one specific embodiment, after the preset gesture corresponding to the "X"-shaped touch track is recognized as the gesture matching the input, the expression corresponding to the user's gesture is obtained through the gesture-expression mapping table.
The expression sending module is used for sending the expression information.
After the expression information is obtained, it is sent directly: the expression is entered into the dialog box and the send command is executed. The user can thus send expression information simply by inputting a short gesture command on the terminal's user interface.
From the user's point of view, the process collapses into a single action: the user draws "X" in the input interface, and what they wanted to send appears in the chat interface:
[image: the "OK" emoticon]
Moreover, the user can update the shortcut sending gestures at any time according to their own habits, which is undoubtedly quick and efficient.
Of course, in some embodiments, to avoid a wrongly input message being sent out directly, the system may be set so that the expression is only entered into the dialog box and the message is sent manually by the user.
Example three:
The gesture-expression mapping table is a data structure used for storing the mapping relations between gestures and expression information, whose contents can be updated according to the user's definitions.
In this embodiment, establishing the gesture-expression mapping table comprises establishing a standardized gesture-expression mapping table: a mapping table in its initial state, with a system-defined format and system-defined gesture-expression mappings, in which the initial gestures are chosen mainly according to the meanings of the expressions.
The method comprises the steps of establishing a gesture-expression mapping table based on statistics during the use process of a user, counting the use frequency of expression information, associating preset gestures with the expressions if the use frequency of the expression information is higher than a preset value, and counting the number of output expression information which is associated with a user ID and related to device input within a set time period of .
After the mapping table is established, the preset gesture-expression mapping table is used for defining the mapping relation between the preset gesture and the expression information.
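The statistics-based construction can be sketched as follows; the threshold value, the pool of free gestures, and the log format are assumptions for illustration only:

```python
# Hypothetical sketch of the statistics-based gesture-expression table:
# count how often each expression is output (per user ID / device, within a
# time window), and bind a free preset gesture to every expression whose
# usage frequency exceeds a preset value.
from collections import Counter

FREQUENCY_THRESHOLD = 5                 # assumed "preset value"
FREE_GESTURES = ["X", "V", "O", "Z"]    # assumed pool of unassigned gestures

def build_statistical_table(usage_log):
    """usage_log: list of (user_id, device_id, expression) events."""
    counts = Counter(expression for _, _, expression in usage_log)
    table = {}
    free = iter(FREE_GESTURES)
    for expression, n in counts.most_common():
        if n <= FREQUENCY_THRESHOLD:
            break                       # most_common is sorted; rest are lower
        try:
            table[next(free)] = expression   # associate a preset gesture
        except StopIteration:
            break                       # no free gestures left
    return table
```

Only expressions above the threshold consume a gesture, so rarely used expressions never crowd the small gesture vocabulary.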
Example four:
in this embodiment, during use, a user ID is bound to the user's habits: when the user logs in to, for example, WeChat or mobile QQ, the system automatically loads the gesture-expression mapping table bound to that login ID.
When the user updates the gesture-expression mapping table on the terminal, the server side can also periodically detect updates to the mapping table on the user's device and synchronize the updated table to the server for backup.
When a user regularly uses multiple devices, for example switching between a mobile phone and a tablet computer as shown in fig. 4, then in addition to server synchronization, the gesture-expression mapping table can be synchronized to all devices on, for example, the same Wi-Fi network or LAN through local-area-network synchronization.
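A toy sketch of the ID binding and server backup described in this example; the in-memory `SERVER_STORE` stands in for real server storage, and all names are hypothetical:

```python
# Hypothetical sketch: the gesture-expression mapping table is keyed by the
# login ID on the server; login loads the bound table, and a local update is
# pushed back to the server as a backup, from where other devices can fetch it.

SERVER_STORE = {}   # stands in for server-side storage, keyed by user ID

def on_login(user_id):
    """Load the gesture-expression table bound to this login ID."""
    return dict(SERVER_STORE.get(user_id, {}))

def on_local_update(user_id, table):
    """Back up the user's updated mapping table to the server."""
    SERVER_STORE[user_id] = dict(table)
```

A second device logged in under the same user ID then sees the update on its next `on_login`, which is the multi-device behavior the example describes; a real implementation would add conflict handling and the LAN synchronization path.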
Of course, in some specific embodiments, it is not only expression information that can be set up for quick sending: common symbols or characters can also be sent quickly by gesture, by editing them in advance and storing them in the mapping table during use.
In some specific embodiments, the traced track is not limited to touch-screen operation on a mobile terminal; it can also be drawn on a personal PC or a notebook computer using, for example, a mouse or a touch pad, and the same method and device realize quick sending of information throughout.
Example five:
A shortcut gesture for quickly sending messages is seamlessly connected with the sending process: the expression is output as soon as the user inputs the gesture. In instant chat, because of its immediacy, there is always the possibility of wrong input; for example, a user wants to send an expression representing "OK" with the gesture "V", but carelessly draws "X" instead and thereby sends the wrong expression.
This hinders communication. Although WeChat offers a recall operation, the sent information has often already been received by the other party, possibly causing misunderstanding and an undesirable user experience.
For this reason, in the present embodiment, as shown in fig. 5, preview and delayed-sending functions are provided. After the user inputs the shortcut gesture, an expression preview first appears on the user interface; the expression is not sent immediately but enters a sending countdown, and if the user performs no pause operation during this period, the expression information is sent out after the delay. The delay is set according to the user's habits, so normal communication is not affected while enough reaction time is left for the user.
In specific embodiments, the delayed-sending duration is set to 3 seconds: after the user inputs the shortcut sending gesture, a preview of the expression and a 3-second countdown progress icon appear on the terminal interface, and if the user finds that the previewed expression is not the one he or she wants to send, the sending process can be stopped quickly through a preset gesture.
As shown in fig. 6, the expression sending module includes an expression preview sub-module, a delay timer, a pause sub-module, and an expression sending sub-module.
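The preview / countdown / pause pipeline can be sketched with a cancellable timer; the class and its method names are illustrative only, not the patented implementation:

```python
# Hypothetical sketch of delayed sending: after a shortcut gesture the
# expression is previewed and a countdown timer starts; the user's preset
# pause gesture cancels the timer, otherwise the expression is sent when the
# delay (3 s by default, user-configurable) elapses.
import threading

class DelayedSender:
    def __init__(self, delay=3.0, send=print):
        self.delay = delay        # countdown duration in seconds
        self.send = send          # callback that actually sends the expression
        self._timer = None

    def preview_and_schedule(self, expression):
        """Show the preview (elided here) and start the sending countdown."""
        self._timer = threading.Timer(self.delay, self.send, args=(expression,))
        self._timer.start()

    def pause(self):
        """Invoked by the preset pause gesture: stop the pending send."""
        if self._timer is not None:
            self._timer.cancel()
```

If `pause()` arrives before the timer fires, nothing is sent; otherwise the expression goes out automatically, which matches the delayed-sending behavior of this embodiment.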
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and such modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (11)

1. A method for quickly sending facial expression information, the method comprising:
detecting an input operation of a user on an interface in a screen locking state and with a message to be previewed, and acquiring an operation characteristic corresponding to the input operation; the operation features comprise input tracks corresponding to the input operations, and the input tracks comprise non-closed tracks with specific shapes and closed tracks with specific shapes;
inquiring a preset gesture matched with the input operation according to the operation characteristics;
extracting expression information corresponding to the input operation according to the mapping relation between the preset gesture and the expression information;
sending the expression information;
the preset mapping relation between the gestures and the expression information is obtained through the following modes:
counting the use frequency of output expression information related to user ID and equipment input, if the use frequency of the expression information is higher than a preset value, associating a preset gesture for the expression, and establishing a gesture-expression mapping table based on statistics; and defining the mapping relation between the preset gesture and the expression information by using the gesture-expression mapping table based on statistics.
2. The method of claim 1, wherein obtaining the operation feature corresponding to the input operation comprises:
continuously acquiring an input position of the input operation and a time value corresponding to the input position;
and obtaining an input track corresponding to the input operation according to the input positions which are continuous in time sequence.
3. The method of claim 1, wherein querying the preset gesture matching the input operation according to the operation feature comprises:
judging whether a gesture matched with the input track of the operation characteristic exists in preset gestures; and if the preset gesture exists, determining the preset gesture as a gesture matched with the input operation, and if the preset gesture does not exist, judging that the input is invalid without a corresponding target gesture.
4. The method according to claim 1, further comprising a step of presetting a mapping relationship between the preset gesture and expression information, wherein the presetting of the mapping relationship between the preset gesture and expression information comprises:
establishing a standardized gesture-expression mapping table;
and defining the mapping relation between the preset gesture and the expression information by using the standardized gesture-expression mapping table.
5. The method of claim 1, further comprising:
and updating the mapping relation between the preset gesture and the expression information based on the input of the user.
6. A device for quickly sending facial expression information, characterized in that the device comprises:
the device comprises a characteristic acquisition module, a display module and a display module, wherein the characteristic acquisition module is used for detecting input operation of a user on an interface which is in a screen locking state and has a message to be previewed and acquiring operation characteristics corresponding to the input operation; the operation features comprise input tracks corresponding to the input operations, and the input tracks comprise non-closed tracks with specific shapes and closed tracks with specific shapes;
the query matching module is used for querying a preset gesture matched with the input operation according to the operation characteristics;
the expression extraction module is used for extracting expression information corresponding to the input operation according to the mapping relation between the preset gesture and the expression information;
the expression sending module is used for sending the expression information;
the preset mapping relation between the gestures and the expression information is obtained through the following modes:
counting the use frequency of output expression information related to user ID and equipment input, if the use frequency of the expression information is higher than a preset value, associating a preset gesture for the expression, and establishing a gesture-expression mapping table based on statistics; and defining the mapping relation between the preset gesture and the expression information by using the gesture-expression mapping table based on statistics.
7. The apparatus of claim 6, wherein the feature obtaining module comprises:
the position recording module and the timer are used for continuously acquiring an input position of the input operation and a time value corresponding to the input position;
and the track calculation module is used for obtaining an input track corresponding to the input operation according to the input positions which are continuous in time sequence.
8. The apparatus of claim 6, wherein the query matching module comprises:
the matching judgment module is used for judging whether a gesture matched with the input track of the operation characteristic exists in preset gestures; and if the preset gesture exists, determining the preset gesture as a gesture matched with the input operation, and if the preset gesture does not exist, judging that the input is invalid without a corresponding target gesture.
9. The apparatus of claim 6, further comprising a presetting module, the presetting module comprising:
the standardization module is used for establishing a standardized gesture-expression mapping table;
and the mapping relation definition module is used for defining the mapping relation between the preset gesture and the expression information by using the standardized gesture-expression mapping table.
10. The apparatus of claim 6, further comprising an updating module for updating the mapping relationship between the preset gesture and the expression information based on user input.
11. A terminal, characterized in that it comprises the device of any one of claims 6-10.
CN201610796357.4A 2016-08-31 2016-08-31 method, device and terminal for quickly sending information Active CN106453823B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610796357.4A CN106453823B (en) 2016-08-31 2016-08-31 method, device and terminal for quickly sending information


Publications (2)

Publication Number Publication Date
CN106453823A CN106453823A (en) 2017-02-22
CN106453823B true CN106453823B (en) 2020-01-31

Family

ID=58165480

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610796357.4A Active CN106453823B (en) 2016-08-31 2016-08-31 method, device and terminal for quickly sending information

Country Status (1)

Country Link
CN (1) CN106453823B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109165072A (en) * 2018-08-28 2019-01-08 珠海格力电器股份有限公司 Expression package generation method and device
CN111176545B (en) * 2019-12-30 2021-05-04 清华大学 Equipment control method, system, electronic equipment and storage medium
CN112596604A (en) * 2020-12-09 2021-04-02 Oppo广东移动通信有限公司 Message sending method, message receiving method, device and electronic equipment
CN112612362B (en) * 2020-12-17 2023-04-07 拉扎斯网络科技(上海)有限公司 Task execution method and device based on gesture interaction
CN114415824B (en) * 2021-12-13 2024-08-09 珠海格力电器股份有限公司 Intelligent bracelet control method and device, computer equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102999296A (en) * 2012-12-03 2013-03-27 北京百度网讯科技有限公司 Method and device for inputting texts into mobile terminal quickly and conveniently and mobile terminal
CN104635930A (en) * 2015-02-09 2015-05-20 联想(北京)有限公司 Information processing method and electronic device
CN104731439A (en) * 2013-12-19 2015-06-24 青岛海信移动通信技术股份有限公司 Gesture packaging and task executing method and device
CN105138222A (en) * 2015-08-26 2015-12-09 美国掌赢信息科技有限公司 Method for selecting expression icon and electronic equipment



Similar Documents

Publication Publication Date Title
CN106453823B (en) method, device and terminal for quickly sending information
US10923118B2 (en) Speech recognition based audio input and editing method and terminal device
CN109726367B (en) Comment display method and related device
CN102780653B (en) Quick method, client and the system communicated in instant messaging
KR101448336B1 (en) A method of service extension using message input window included in chatting window providing instant messaging service
EP3014408A1 (en) Showing interactions as they occur on a whiteboard
CN110321044A (en) Sharing files method and terminal
WO2011038669A1 (en) Determining object method, object display method, object switch method and electron device
CN111490927B (en) Method, device and equipment for displaying message
KR20130111453A (en) Touch-based method and apparatus for sending information
CN102655544A (en) Method for issuing communication, and communication terminal
CN106341481A (en) Processing method and device of information push and equipment
CN103596028A (en) Method and device for controlling smart television
WO2018040547A1 (en) Method and device for creating group
CN113127432B (en) Operation execution method, device, electronic equipment and medium
CN105389113A (en) Gesture-based application control method and apparatus and terminal
CN104662507A (en) Searching at a user device
KR20150023151A (en) Electronic device and method for executing application thereof
CN103294351B (en) A kind of display packing and electronic equipment
CN104679506B (en) A kind of terminal
CN105491237A (en) Contact information display method and terminal
CN103150083B (en) A kind of display packing of self-defined desktop icons and device
CN112818094A (en) Chat content processing method and device and electronic equipment
CN109922199B (en) Contact information processing method and terminal
CN105450507A (en) Method and device for sharing information in social network

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant