CN106649712A - Method and device for inputting expression information - Google Patents
- Publication number
- CN106649712A CN106649712A CN201611188433.XA CN201611188433A CN106649712A CN 106649712 A CN106649712 A CN 106649712A CN 201611188433 A CN201611188433 A CN 201611188433A CN 106649712 A CN106649712 A CN 106649712A
- Authority
- CN
- China
- Prior art keywords
- information
- target
- image
- expression
- image information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/5838—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/5846—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using extracted text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9535—Search customisation based on user profiles and personalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9538—Presentation of query results
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/02—Input arrangements using manually operated switches, e.g. using keyboards or dials
- G06F3/023—Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
- G06F3/0233—Character input methods
- G06F3/0237—Character input methods using prediction or retrieval techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/0412—Digitisers structurally integrated in a display
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
Abstract
The invention discloses a method and device for inputting expression information, and relates to the field of social applications. The method includes: acquiring target feature information of an inputter, the target feature information including at least one of facial feature information and limb feature information; acquiring target expression information corresponding to the target feature information; and inputting the target expression information. This avoids the large amount of search time consumed when expression information is input in the related art, and solves the technical problem of low expression-information input efficiency.
Description
Technical field
The present disclosure relates to the field of social networking applications, and more particularly to a method and device for inputting expression information.
Background technology
As the usage of social chat software keeps rising, terminals provide a large number of emoticons for users to choose from; during a chat, a user can select an appropriate emoticon to vividly express his or her mood at the time.
Summary of the invention
To overcome problems in the related art, the present disclosure provides a method and device for inputting expression information.
According to a first aspect of the embodiments of the present disclosure, there is provided a method for inputting expression information, including: acquiring target feature information of an inputter, the target feature information including at least one of the following: facial feature information and limb feature information; acquiring target expression information corresponding to the target feature information; and inputting the target expression information.
Optionally, acquiring the target feature information of the inputter includes: acquiring target information, the target information including at least one of the following: first image information and audio information; and acquiring the target feature information from the target information.
Optionally, acquiring the target information includes: collecting the target information by a collecting device; or acquiring the target information selected by the inputter from a local database.
Optionally, the method further includes: acquiring a target database, the target database including correspondences between feature information indicating an inputter and expression information. Acquiring the target expression information corresponding to the target feature information includes: acquiring the target expression information corresponding to the target feature information according to the target database.
Optionally, the target expression information includes any one of the following: emotion icon information, emoticon information, and second image information, wherein the second image information is acquired according to the first image information.
Optionally, the method further includes: determining whether the target database includes a correspondence between the target feature information and the target expression information. Acquiring the target expression information corresponding to the target feature information includes: when the target database does not include the correspondence between the target feature information and the target expression information, taking the first image information as the second image information to obtain the target expression information; or processing the first image information to obtain the second image information, and taking the second image information as the target expression information.
Optionally, processing the first image information to obtain the second image information and taking the second image information as the target expression information includes: acquiring a model image selected by the inputter; synthesizing the first image information with the model image to obtain the second image information; and taking the second image information as the target expression information.
Optionally, synthesizing the first image information with the model image to obtain the second image information includes: extracting feature information of the user from the first image information; and adding the feature information of the user to an image region of the model image selected by the inputter.
Optionally, processing the first image information to obtain the second image information and taking the second image information as the target expression information includes: acquiring an image parameter of the first image information; adjusting the image parameter to a target parameter determined by the inputter to obtain the second image information; and taking the second image information as the target expression information.
According to a second aspect of the embodiments of the present disclosure, there is provided a device for inputting expression information, the device including: a first acquisition module configured to acquire target feature information of an inputter, the target feature information including at least one of the following: facial feature information and limb feature information; a second acquisition module configured to acquire target expression information corresponding to the target feature information; and an input module configured to input the target expression information.
Optionally, the first acquisition module includes: a first acquisition submodule configured to acquire target information, the target information including at least one of the following: first image information and audio information; and a second acquisition submodule configured to acquire the target feature information from the target information.
Optionally, the first acquisition submodule is configured to collect the target information by a collecting device, or to acquire the target information selected by the inputter from a local database.
Optionally, the device further includes: a third acquisition module configured to acquire a target database, the target database including correspondences between feature information indicating an inputter and expression information; and the second acquisition module is configured to acquire the target expression information corresponding to the target feature information according to the target database.
Optionally, the target expression information includes any one of the following: emotion icon information, emoticon information, and second image information, wherein the second image information is acquired according to the first image information.
Optionally, the device further includes: a determining module configured to determine whether the target database includes a correspondence between the target feature information and the target expression information; and the second acquisition module is configured to, when the target database does not include the correspondence between the target feature information and the target expression information, take the first image information as the second image information to obtain the target expression information, or process the first image information to obtain the second image information and take the second image information as the target expression information.
Optionally, the second acquisition module is configured to acquire a model image selected by the inputter, synthesize the first image information with the model image to obtain the second image information, and take the second image information as the target expression information.
Optionally, the second acquisition module is configured to extract feature information of the user from the first image information, and add the feature information of the user to an image region of the model image selected by the inputter.
Optionally, the second acquisition module is configured to acquire an image parameter of the first image information, adjust the image parameter to a target parameter determined by the inputter to obtain the second image information, and take the second image information as the target expression information.
According to a third aspect of the embodiments of the present disclosure, there is provided a device for inputting expression information, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to: acquire target feature information of an inputter, the target feature information including at least one of the following: facial feature information and limb feature information; acquire target expression information corresponding to the target feature information; and input the target expression information.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium; when instructions in the storage medium are executed by a processor of a mobile terminal, the mobile terminal is enabled to perform a method for inputting expression information, the method including: acquiring target feature information of an inputter, the target feature information including at least one of the following: facial feature information and limb feature information; acquiring target expression information corresponding to the target feature information; and inputting the target expression information.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects: target feature information of an inputter is acquired, the target feature information including at least one of the following: facial feature information and limb feature information; target expression information corresponding to the target feature information is acquired; and the target expression information is input, thereby solving the technical problem of low expression-information input efficiency in the related art.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Description of the drawings
The accompanying drawings herein are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
Fig. 1 is a flow chart of a method for inputting expression information according to an exemplary embodiment;
Fig. 2 is a flow chart of another method for inputting expression information according to an exemplary embodiment;
Fig. 3 is a flow chart of yet another method for inputting expression information according to an exemplary embodiment;
Fig. 4 is a block diagram of a first device for inputting expression information according to an exemplary embodiment;
Fig. 5 is a block diagram of a second device for inputting expression information according to an exemplary embodiment;
Fig. 6 is a block diagram of a third device for inputting expression information according to an exemplary embodiment;
Fig. 7 is a block diagram of a fourth device for inputting expression information according to an exemplary embodiment;
Fig. 8 is a block diagram of a fifth device for inputting expression information according to an exemplary embodiment.
Specific embodiment
Exemplary embodiments will now be described in detail, examples of which are illustrated in the accompanying drawings. In the following description, when the accompanying drawings are referred to, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The present disclosure can be applied to scenarios of inputting information, for example, scenarios in which a user inputs information through a terminal (such as a mobile phone) when chatting or posting comments. In such scenarios, a user often vividly expresses his or her current mood by inputting expression information: for example, a smiling-face expression is input to show that the user's current mood is happy, or a tear-shedding expression is input to show that the user's current mood is sad. In the related art, the terminal prestores a large amount of expression information; to input an emoticon that matches the user's current mood, the user needs to search one by one through the listed expression information. This search process takes considerable time, thereby reducing the efficiency of inputting information.
To solve the above problems, the present disclosure provides a method and apparatus for inputting expression information. The method acquires target feature information of an inputter, the target feature information including at least one of the following: facial feature information and limb feature information; acquires target expression information corresponding to the target feature information; and inputs the target expression information. This avoids the large amount of search time consumed in the related art when expression information is input, thereby solving the technical problem of low expression-information input efficiency.
The present disclosure is described in detail below through specific embodiments.
Fig. 1 is a flow chart of a method for inputting expression information according to an exemplary embodiment. As shown in Fig. 1, the method is applied to a terminal and includes the following steps.
In step 101, target feature information of an inputter is acquired.
Here, the target feature information includes at least one of the following: facial feature information and limb feature information.
In this step, target information may be acquired, and the target feature information may be acquired from the target information; the target information includes at least one of the following: first image information and audio information.
In step 102, target expression information corresponding to the target feature information is acquired.
Here, the target expression information includes any one of the following: emotion icon information, emoticon information, and second image information. The emotion icon information may be a static expression picture or a dynamic expression picture; the emoticon information may be a "face word", i.e., a pattern composed of punctuation marks or letters that represents an expression; and the second image information is acquired according to the first image information. The above examples are merely illustrative, and the present disclosure is not limited thereto.
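As a minimal sketch (not taken from the patent itself), the three kinds of target expression information named above can be modeled as plain values; the field names and asset path below are illustrative assumptions.

```python
# Emotion icon information: a static or animated expression picture,
# referenced here by a hypothetical asset path.
emotion_icon = {"kind": "icon", "asset": "smile_static.png", "animated": False}

# Emoticon ("face word") information: a pattern built from punctuation
# marks or letters that represents an expression.
emoticon = {"kind": "emoticon", "text": ":-)"}

# Second image information: an image derived from the first image
# information (the captured photo itself, or a processed version of it).
second_image = {"kind": "image", "derived_from": "first_image", "pixels": None}

for info in (emotion_icon, emoticon, second_image):
    assert info["kind"] in {"icon", "emoticon", "image"}
print([info["kind"] for info in (emotion_icon, emoticon, second_image)])
```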
In step 103, the target expression information is input.
In this step, the target expression information may be input into an input area. The input area may be an input box used to input expression information or text information. After the target expression information is input into the input box, the target expression information can be transmitted: for example, in a chat scenario, the target expression information can be sent to the other party; in a scenario of browsing web pages (such as the Xiaomi forum), the target expression information expressing a personal view can be posted on related news or posts; and in a scenario of updating a personal homepage (such as WeChat Moments or Weibo), the target expression information can be uploaded. The above examples are merely illustrative, and the present disclosure is not limited thereto.
With the above method, by acquiring the target feature information of an inputter, the target feature information including at least one of the following: facial feature information and limb feature information; acquiring the target expression information corresponding to the target feature information; and inputting the target expression information, the large amount of search time consumed in the related art when expression information is input is avoided, thereby solving the technical problem of low expression-information input efficiency.
Fig. 2 is a flow chart of another method for inputting expression information according to an exemplary embodiment. As shown in Fig. 2, the target information in this embodiment is illustrated by taking the first image information as an example, and the method includes the following steps.
In step 201, the first image information is acquired.
In this step, the first image information may be acquired in either of two ways: collecting the first image information by a collecting device; or acquiring the first image information selected by the inputter from a local database.
Illustratively, when the user needs to input expression information, the user taps the expression enter key on the input keyboard; the terminal then invokes the camera and captures the user's facial image information or limb image information (i.e., the first image information). Alternatively, the terminal acquires facial image information or limb image information (i.e., the first image information) selected by the inputter from the photo album (equivalent to the local database). The facial image information may include images of the form or position of facial organs, such as an image of making a face; the limb image information may include images of limb actions, such as an image of a thumbs-up. The above examples are merely illustrative, and the present disclosure is not limited thereto.
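The two acquisition paths of step 201 can be sketched as below, with both sources simulated; the function name, the album keys, and the placeholder byte strings are assumptions for illustration only.

```python
def acquire_first_image(source, camera=None, album=None, choice=None):
    """Acquire the first image information either from the collecting
    device (camera capture) or from the local database (photo album)."""
    if source == "camera":
        return camera()        # capture now via the collecting device
    if source == "album":
        return album[choice]   # image previously stored locally
    raise ValueError("unknown source")

album = {"IMG_001": "face_image_bytes", "IMG_002": "limb_image_bytes"}
print(acquire_first_image("album", album=album, choice="IMG_002"))
print(acquire_first_image("camera", camera=lambda: "captured_face_bytes"))
```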
In step 202, the target feature information is acquired from the first image information.
Here, the target feature information includes at least one of the following: facial feature information and limb feature information. Illustratively, when the acquired first image information is facial image information, information such as the form of the facial organs and their positions on the face can be acquired, and the terminal can extract the target feature information according to changes in the facial organs. For example, changes in the facial organs include changes in the form and position of organs such as the eyebrows, eyes, eyelids, mouth, and nose, such as eyebrows curving down, mouth corners drooping, brows knitted together, eyes wide open, nostrils flaring, or cheeks raised. When the acquired first image information is limb image information, the target feature information may include limb actions (actions made by body parts such as the hands, elbows, arms, hips, and feet); illustratively, rubbing the hands expresses anxiety, beating the chest expresses pain, hanging the head expresses dejection, and stamping a foot expresses anger. The above examples are merely illustrative, and the present disclosure is not limited thereto.
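A hedged sketch of the facial branch of step 202: the patent describes extracting feature information from changes in facial organs (mouth corners, eyebrows, and so on). Here hypothetical landmark measurements are mapped to a coarse label; the thresholds and measurement names are assumptions, and a real system would use a face-landmark model rather than these toy rules.

```python
def classify_facial_feature(landmarks):
    """Map simple geometric cues to a facial feature label."""
    mouth_delta = landmarks["mouth_corner_y"] - landmarks["mouth_center_y"]
    brow_gap = landmarks["brow_distance"]
    if mouth_delta < 0:   # mouth corners raised above the mouth center
        return "smile"
    if brow_gap < 0.2:    # brows drawn together (knitted)
        return "frown"
    return "neutral"

print(classify_facial_feature(
    {"mouth_corner_y": 0.40, "mouth_center_y": 0.45, "brow_distance": 0.5}
))  # smile
```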
In step 203, a target database is acquired.
Here, the target database includes correspondences between feature information indicating an inputter and expression information. The expression information may be a large number of prestored expression models (such as happy, sad, frightened, and disgusted expression models), and the feature information may include facial feature information and limb feature information; for the method of acquiring the feature information, refer to step 201, which is not repeated here. For example, a facial image model is collected by the camera, and the facial feature information extracted from the facial image model is a smile; a correspondence is then established between the facial feature information "smile" and the expression information representing a smile. As another example, a limb image model is collected by the camera, and the limb feature information extracted from the limb image model is beating the chest; a correspondence is then established between the limb feature information "beating the chest" and the expression information representing pain. As yet another example, a facial image model is selected by the inputter from the photo album, and the facial feature information extracted from the facial image model is sticking out the tongue; a correspondence is then established between the facial feature information "sticking out the tongue" and the expression information representing naughtiness. In this way, in subsequent steps, the acquired target feature information can be matched against the feature information stored in the target database to obtain the target expression information.
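The target database of step 203 can be sketched as a simple correspondence table built from the three examples in the text; all names here are illustrative, not the patent's data format.

```python
target_database = {
    # facial feature -> expression information
    "smile":      "smiley_expression",
    "tongue_out": "naughty_expression",
    # limb feature -> expression information
    "beat_chest": "pain_expression",
}

def register(db, feature, expression):
    """Add a new feature/expression correspondence, as when a model image
    is collected by the camera or chosen from the photo album."""
    db[feature] = expression

register(target_database, "stamp_foot", "angry_expression")
print(target_database["beat_chest"], target_database["stamp_foot"])
```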
In step 204, it is determined whether the target database includes a correspondence between the target feature information and the target expression information.
In this step, whether the target database includes the correspondence between the target feature information and the target expression information can be determined in either of the following two ways:
Mode one: the matching degrees between the target feature information and the feature information stored in the target database are acquired respectively. When a matching degree is greater than or equal to a preset threshold, the feature information corresponding to that matching degree is determined to be matching feature information, the expression information corresponding to the matching feature information is the target expression information, and it is determined that the target database includes the correspondence between the target feature information and the target expression information. When the matching degree is less than the preset threshold, it is determined that the target database does not include the correspondence between the target feature information and the target expression information.
Mode two: the matching degrees between the target feature information and the feature information stored in the target database are acquired respectively, and the acquired matching degrees are sorted in descending order to obtain the maximum matching degree. When the maximum matching degree is greater than or equal to the preset threshold, the feature information corresponding to the maximum matching degree is determined to be matching feature information, the expression information corresponding to the matching feature information is the target expression information, and it is determined that the target database includes the correspondence between the target feature information and the target expression information. When the maximum matching degree is less than the preset threshold, it is determined that the target database does not include the correspondence between the target feature information and the target expression information.
As can be seen from the above description, in mode one each obtained matching degree is compared with the preset threshold in turn; whenever a matching degree is greater than or equal to the preset threshold, the corresponding feature information is determined to be matched feature information, and the expression information corresponding to that matched feature information is a target expression information. Thus, when several matching degrees are greater than or equal to the preset threshold, several target expression informations can be obtained, from which the user may further select the one needed. In mode two, after all matching degrees have been obtained, the maximum matching degree is selected from them and compared with the preset threshold; only when the maximum matching degree is greater than or equal to the preset threshold is the feature information corresponding to the maximum matching degree determined to be matched feature information, whose corresponding expression information is the target expression information.
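The two matching modes described above can be sketched as follows. This is a minimal illustration only: the similarity function, the dictionary layout and all names are assumptions for the sketch, not part of the disclosure.

```python
def match_mode_one(target, database, similarity, threshold):
    """Mode one: compare each matching degree with the preset threshold in
    turn; every stored feature meeting it yields a candidate expression."""
    candidates = []
    for feature, expression in database.items():
        if similarity(target, feature) >= threshold:
            candidates.append(expression)
    return candidates  # possibly several, for the user to choose from


def match_mode_two(target, database, similarity, threshold):
    """Mode two: sort the matching degrees in descending order, keep only
    the maximum, and compare that single value with the preset threshold."""
    if not database:
        return None
    ranked = sorted(database, key=lambda f: similarity(target, f), reverse=True)
    best = ranked[0]
    if similarity(target, best) >= threshold:
        return database[best]  # expression for the best-matched feature
    return None  # no correspondence found in the target database
```

As the sketch shows, mode one may return several candidates while mode two returns at most one, which matches the comparison in the paragraph above.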
When it is determined that the target database includes the correspondence between the target feature information and the target expression information, step 205 is performed; when it is determined that the target database does not include the correspondence between the target feature information and the target expression information, step 206 is performed.
In step 205, the target expression information is input.
The target expression information includes any one of the following: expression icon information, emoticon information. The expression icon information may be a static expression picture or a dynamic expression picture; the emoticon information may be a kaomoji, that is, a pattern representing an expression composed of punctuation marks or English letters. The above examples are merely illustrative, and the present disclosure is not limited thereto.
In this step, the terminal may input the target expression information into an input box, the input box being used for inputting expression information or text information. After the expression information is input into the input box, the expression information can be sent. For example, in a chat scenario, the expression information can be sent to the other party; in a scenario of browsing a web page (such as the Xiaomi forum), expression information expressing a personal view can be posted on a related news item or thread; in a scenario of updating a personal homepage (such as WeChat Moments or Weibo), the expression information can be uploaded.
It should be noted that, if several matching degrees in step 204 are greater than or equal to the preset threshold, several target expression informations may be obtained, and the terminal then cannot determine which target expression information to input. To solve this problem, in one embodiment of the present disclosure, the terminal may display all of the obtained target expression informations to the user through a display box for the user to select from, and after the user determines the needed target expression information, the terminal inputs the target expression information selected by the user. In another embodiment of the present disclosure, the terminal may input all of the obtained target expression informations into the input box; further, in order to improve the interactivity between the user and the terminal, in this embodiment the user may delete the unneeded ones of the input target expression informations, so as to obtain the accurate target expression information and send it out. The above examples are merely illustrative, and the present disclosure is not limited thereto.
In step 206, the first image information is processed to obtain second image information, and the second image information is taken as the target expression information.
In one possible implementation, the first image information may be processed to obtain the second image information, taken as the target expression information, in either of the following two ways:
Mode one: a template image selected by the inputter is obtained; the first image information is synthesized with the template image to obtain the second image information, and the second image information is taken as the target expression information. Specifically, the feature information of the user is extracted from the first image information, and the feature information of the user is added to an image region of the template image selected by the inputter, where the template image may be a preset image template to which the user can add the user's feature information. For example, when the template image is a kitten lacking eyes and a mouth and the extracted user features are a pouting mouth and blinking eyes, the pouting-mouth and blinking-eye features are placed at the positions corresponding to the kitten's mouth and eyes respectively. As another example, when the template image is Bai Suzhen (the White Snake Lady) lacking eyebrows and a mouth, and the extracted user features are arched eyebrows and upturned mouth corners, the arched-eyebrow and upturned-mouth features are placed at the positions corresponding to her eyebrows and mouth. As yet another example, when the template image is Donald Duck lacking legs and the extracted user feature is jumping legs, the jumping-leg feature is placed at the position corresponding to Donald Duck's legs. The above examples are merely illustrative, and the present disclosure is not limited thereto.
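A minimal sketch of the synthesis in mode one, under stated assumptions: the template's empty regions (for example, the kitten's missing eyes and mouth) are modelled abstractly as dictionary entries rather than pixels, and the extracted user features fill them. The function and region names are hypothetical, not from the disclosure.

```python
def synthesize(template, user_features):
    """Fill each empty region of the template with the matching
    extracted user feature, leaving occupied regions untouched."""
    composed = dict(template["regions"])
    for part, feature in user_features.items():
        if part in composed and composed[part] is None:
            composed[part] = feature  # e.g. place the pouting mouth
    return {"name": template["name"], "regions": composed}


# The kitten template lacks eyes and a mouth, as in the first example.
kitten = {"name": "kitten",
          "regions": {"eyes": None, "mouth": None, "ears": "cat ears"}}
second_image = synthesize(kitten, {"eyes": "blinking", "mouth": "pouting"})
```

A real implementation would of course composite image data rather than strings; the sketch only shows the region-by-region placement the examples describe.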
Mode two: an image parameter of the first image information is obtained, the image parameter is adjusted to a target parameter determined by the inputter to obtain the second image information, and the second image information is taken as the target expression information. The image parameter may include parameters such as the colour of the image, the size of the facial features, or the positions of the facial features in the image. For example, when the image parameters obtained from the first image information include the size of the eyes and the colour of the lips, the terminal may adjust the size of the eyes and the colour of the lips to obtain the second image information, and take the second image information as the target expression information. As another example, when the image parameters obtained from the first image information include the skin colour and the face shape, the terminal may adjust the skin colour and the face shape to obtain the second image information, and take the second image information as the target expression information. As yet another example, when the image parameter obtained from the first image information indicates that the image is in colour, the terminal may convert the image to black and white to obtain the second image information, and take the second image information as the target expression information. The above examples are merely illustrative, and the present disclosure is not limited thereto.
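The colour-to-black-and-white example of mode two can be illustrated without any imaging library by treating pixels as plain (R, G, B) tuples. The luminance weights follow the common ITU-R BT.601 convention, an assumption of this sketch; the disclosure itself does not specify a conversion formula.

```python
def to_black_and_white(pixels):
    """Replace each RGB pixel with its luminance (BT.601 weights),
    producing a grayscale version of the colour image."""
    result = []
    for r, g, b in pixels:
        y = round(0.299 * r + 0.587 * g + 0.114 * b)
        result.append((y, y, y))
    return result


# Pure red and pure green pixels become shades of grey.
bw_pixels = to_black_and_white([(255, 0, 0), (0, 255, 0)])
```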
In order to further reduce the image-processing operations performed by the inputter, in another possible implementation the first image information may be taken directly as the second image information, thereby obtaining the target expression information. Illustratively, when the first image information obtained is an image of the user waving goodbye, that image can be used directly as the target expression information, thereby improving the user experience.
After the target expression information is determined, step 205 is performed.
With the above method, the target feature information of the inputter is obtained, the target feature information including at least one of the following: facial feature information, body feature information; the target expression information corresponding to the target feature information is obtained; and the target expression information is input. In this way, the large amount of search time spent by related techniques when inputting expression information is avoided, thereby solving the technical problem of low efficiency of expression-information input.
Fig. 3 is a flowchart of a method of inputting expression information according to an exemplary embodiment. As shown in Fig. 3, the present embodiment is described taking the case where the target information is audio information as an example; the method may include the following steps:
In step 301, audio information is obtained.
In this step, the audio information may be obtained in either of two ways: the audio information is collected by a collecting device; or, audio information selected by the inputter is obtained from a local database.
Illustratively, when the user needs to input expression information, the user taps the expression key on the input keyboard, whereupon the terminal collects the user's audio information through a microphone; or, the terminal obtains audio information selected by the inputter from a music library or a recording library (equivalent to the local database).
In step 302, target feature information is obtained from the audio information.
The target feature information includes at least one of the following: facial feature information, body feature information.
In one possible implementation, the terminal converts the audio information into text information and then extracts text features from the text information. The text features may include various emotionally coloured words (such as happy, sad, angry or terrified) and modal particles used at the end of a sentence to express tone. Speech parameters, including pitch, loudness and timbre, may also be obtained from the audio information. The terminal can then obtain the target feature information according to the text features and/or the speech parameters. For example, when the text feature is "haha", the target feature information is a smile (i.e. facial feature information); as another example, when the text feature is "yeah", the target feature information is a scissor-hand gesture (i.e. body feature information). The above examples are merely illustrative, and the present disclosure is not limited thereto.
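A hedged sketch of how the text features extracted in step 302 might map to facial or body feature information. The keyword table and all names here are illustrative stand-ins; a real system would also weigh speech parameters such as pitch, loudness and timbre, which this sketch omits.

```python
# Hypothetical keyword table: text feature -> (category, feature information).
KEYWORD_FEATURES = {
    "haha": ("face", "smiling"),
    "yeah": ("body", "scissor-hand gesture"),
    "sad": ("face", "frowning"),
}


def extract_target_features(text):
    """Return the (category, feature) pairs whose keywords occur
    in the text converted from the audio information."""
    words = text.lower().split()
    return [KEYWORD_FEATURES[w] for w in words if w in KEYWORD_FEATURES]


features = extract_target_features("yeah we won haha")
```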
In step 303, a target database is obtained.
The target database includes correspondences between feature information indicative of the inputter and expression information, the expression information being a large number of prestored expression models (such as happy, sad, frightened or disgusted expression models). An audio information model of the inputter may be collected in advance through the microphone, or selected by the inputter from the local database, and converted into a text information model; text features (such as emotionally coloured words and sentence-final modal particles expressing tone) are extracted from the text information model, so as to establish correspondences between the text features and preset feature information (i.e. facial feature information and body feature information). Alternatively, speech parameters such as pitch, loudness and timbre may be obtained directly from the audio information model, and correspondences established between those speech parameters and the preset feature information.
For example, an audio information model is collected by the microphone and converted into a text information model; text features such as joyful, happy or glad are extracted from the text information model; those text features are then associated with facial feature information or body feature information expressing happiness, and that facial or body feature information is in turn associated with expression information representing a smiling face. As another example, after an audio information model collected by the microphone is converted into a text information model, text features such as sad, sorrowful or grieved are extracted from the text information model; those text features are associated with facial or body feature information expressing sadness, and that facial or body feature information is in turn associated with expression information expressing sadness. As yet another example, speech parameters including pitch, loudness and timbre are extracted from an audio information model collected by the microphone; those speech parameters are associated with corresponding facial or body feature information, which is in turn associated with corresponding expression information. In this way, in a subsequent step the obtained target feature information can be matched against the preset feature information in the target database to obtain the target expression information.
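The correspondences established in step 303 can be pictured as two chained mappings, from text feature to preset feature information and from preset feature information to expression information. The concrete entries below are illustrative assumptions only; the disclosure does not fix any particular vocabulary or expression set.

```python
def build_target_database():
    """Chain the text-feature -> preset-feature and
    preset-feature -> expression correspondences into one lookup table."""
    text_to_feature = {"happy": "smiling face", "sad": "frowning face"}
    feature_to_expression = {"smiling face": ":-)", "frowning face": ":-("}
    return {text: feature_to_expression[feat]
            for text, feat in text_to_feature.items()}


target_db = build_target_database()
```

With such a table in hand, the matching in step 304 reduces to a lookup (possibly fuzzy) of the target feature information against the table's keys.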
In step 304, the target expression information corresponding to the target feature information is obtained according to the target database.
The target expression information corresponding to the target feature information may be obtained in either of the following two ways:
Mode one: the matching degrees between the target feature information and the preset feature information in the target database are obtained respectively; when a matching degree is greater than or equal to a preset threshold, the preset feature information corresponding to that matching degree is determined to be target preset feature information, and the expression information corresponding to the target preset feature information is the target expression information.
Mode two: the matching degrees between the target feature information and the preset feature information in the target database are obtained respectively, and the obtained matching degrees are sorted in descending order to find the maximum matching degree; when the maximum matching degree is greater than or equal to the preset threshold, the preset feature information corresponding to the maximum matching degree is determined to be target preset feature information, and the expression information corresponding to the target preset feature information is the target expression information.
As can be seen from the above description, in mode one each obtained matching degree is compared with the preset threshold in turn; whenever a matching degree is greater than or equal to the preset threshold, the corresponding preset feature information is determined to be target preset feature information, and the expression information corresponding to that target preset feature information is a target expression information, so that when several matching degrees are greater than or equal to the preset threshold, several target expression informations can be obtained. In mode two, after all matching degrees have been obtained, the maximum matching degree is selected from them and compared with the preset threshold; only when the maximum matching degree is greater than or equal to the preset threshold is the preset feature information corresponding to the maximum matching degree determined to be target preset feature information, whose corresponding expression information is the target expression information.
In addition, if the target expression information corresponding to the target feature information cannot be obtained according to the target database, the terminal may display a prompt to the user through a prompt box to remind the user to re-input the audio information. The prompt may include text information, such as "Expression matching failed, please re-input", or may be presented to the user in the form of sound. The sound can be configured in advance; illustratively, it may be a segment of speech, such as a voice saying "input failed", a segment of music, or a prompt tone, and the present disclosure does not limit the specific sound. The prompt may also be given through the breathing light or flash lamp of the terminal, for example by the flashing frequency of the breathing light or flash lamp, or by the colour of the breathing light.
In step 305, the target expression information is input.
The target expression information includes any one of the following: expression icon information, emoticon information. The expression icon information may be a static expression picture or a dynamic expression picture; the emoticon information may be a kaomoji, that is, a pattern representing an expression composed of punctuation marks or English letters. The above examples are merely illustrative, and the present disclosure is not limited thereto.
In this step, the terminal may input the target expression information into an input box, the input box being used for inputting expression information or text information. After the expression information is input into the input box, the expression information can be sent. For example, in a chat scenario, the expression information can be sent to the other party; in a scenario of browsing a web page (such as the Xiaomi forum), expression information expressing a personal view can be posted on a related news item or thread; in a scenario of updating a personal homepage (such as WeChat Moments or Weibo), the expression information can be uploaded.
It should be noted that, if several matching degrees in step 304 are greater than or equal to the preset threshold, several target expression informations may be obtained, and the terminal then cannot determine which target expression information to input. To solve this problem, in one embodiment of the present disclosure, the terminal may display all of the obtained target expression informations to the user through a display box for the user to select from, and after the user determines the needed target expression information, the terminal inputs the target expression information selected by the user. In another embodiment of the present disclosure, the terminal may input all of the obtained target expression informations into the input box; further, in order to improve the interactivity between the user and the terminal, in this embodiment the user may delete the unneeded ones of the input target expression informations, so as to obtain the accurate target expression information and send it out.
With the above method, the target feature information of the inputter is obtained, the target feature information including at least one of the following: facial feature information, body feature information; the target expression information corresponding to the target feature information is obtained; and the target expression information is input. In this way, the large amount of search time spent by related techniques when inputting expression information is avoided, thereby solving the technical problem of low efficiency of expression-information input.
Fig. 4 is a block diagram of a device for inputting expression information according to an exemplary embodiment. Referring to Fig. 4, the device includes a first obtaining module 401, a second obtaining module 402 and an input module 403.
The first obtaining module 401 is configured to obtain target feature information of an inputter, the target feature information including at least one of the following: facial feature information, body feature information;
the second obtaining module 402 is configured to obtain target expression information corresponding to the target feature information;
the input module 403 is configured to input the target expression information.
Optionally, Fig. 5 is a block diagram of the device for inputting expression information shown in the embodiment of Fig. 4. The first obtaining module 401 includes:
a first obtaining submodule 4011, configured to obtain target information, the target information including at least one of the following: first image information, audio information;
a second obtaining submodule 4012, configured to obtain the target feature information from the target information.
Optionally, the first obtaining submodule 4011 is configured to collect the target information through a collecting device; or, to obtain target information selected by the inputter from a local database.
Optionally, Fig. 6 is a block diagram of the device for inputting expression information shown in the embodiment of Fig. 4. The device further includes:
a third obtaining module 404, configured to obtain a target database, the target database including a correspondence between feature information indicative of the inputter and expression information;
the second obtaining module 402 is configured to obtain, according to the target database, the target expression information corresponding to the target feature information.
Optionally, the target expression information includes any one of the following: expression icon information, emoticon information, second image information; wherein the second image information is obtained according to the first image information.
Optionally, Fig. 7 is a block diagram of the device for inputting expression information shown in the embodiment of Fig. 6. The device further includes:
a determining module 405, configured to determine whether the target database includes a correspondence between the target feature information and the target expression information;
the second obtaining module 402 is configured to, when the target database does not include the correspondence between the target feature information and the target expression information, take the first image information as the second image information so as to obtain the target expression information; or, to process the first image information to obtain the second image information and take the second image information as the target expression information.
Optionally, the second obtaining module 402 is configured to obtain a template image selected by the inputter, synthesize the first image information with the template image to obtain the second image information, and take the second image information as the target expression information.
Optionally, the second obtaining module 402 is configured to extract feature information of the user from the first image information, and to add the feature information of the user to an image region of the template image selected by the inputter.
Optionally, the second obtaining module 402 is configured to obtain an image parameter of the first image information, adjust the image parameter to a target parameter determined by the inputter to obtain the second image information, and take the second image information as the target expression information.
With the above device, the target feature information of the inputter is obtained, the target feature information including at least one of the following: facial feature information, body feature information; the target expression information corresponding to the target feature information is obtained; and the target expression information is input, avoiding the large amount of search time spent by related techniques when inputting expression information, thereby solving the technical problem of low efficiency of expression-information input.
With regard to the devices in the above embodiments, the specific manner in which each module performs operations has been described in detail in the embodiments of the related methods, and will not be elaborated here.
Fig. 8 is a block diagram of a device 800 for inputting expression information according to an exemplary embodiment. For example, the device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, and the like.
Referring to Fig. 8, the device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls the overall operation of the device 800, such as operations associated with display, telephone calls, data communication, camera operations and recording operations. The processing component 802 may include one or more processors 820 to execute instructions so as to complete all or part of the steps of the above method of inputting expression information. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation on the device 800. Examples of such data include instructions for any application program or method operating on the device 800, contact data, phonebook data, messages, pictures, videos, and the like. The memory 804 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disc.
The power component 806 provides power for the various components of the device 800. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the device 800.
The multimedia component 808 includes a screen providing an output interface between the device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC); when the device 800 is in an operation mode, such as a call mode, a recording mode or a speech recognition mode, the microphone is configured to receive external audio signals. The received audio signals may be further stored in the memory 804 or sent via the communication component 816. In some embodiments, the audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to: a home button, volume buttons, a start button and a lock button.
The sensor component 814 includes one or more sensors for providing state assessments of various aspects of the device 800. For example, the sensor component 814 can detect the open/closed state of the device 800 and the relative positioning of components, such as the display and keypad of the device 800; the sensor component 814 can also detect a change in position of the device 800 or of a component of the device 800, the presence or absence of contact between the user and the device 800, the orientation or acceleration/deceleration of the device 800, and a change in temperature of the device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the device 800 and other devices. The device 800 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In an exemplary embodiment, the device 800 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors or other electronic elements, for performing the above method of inputting expression information.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, for example the memory 804 including instructions, which can be executed by the processor 820 of the device 800 to complete the above method of inputting expression information. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Those skilled in the art will readily think of other embodiments of the present disclosure after considering the specification and practicing the disclosure. The present application is intended to cover any variations, uses or adaptations of the present disclosure that follow the general principles of the present disclosure and include common knowledge or customary technical means in the art not disclosed by the present disclosure. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structure described above and shown in the accompanying drawings, and that various modifications and changes can be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.
Claims (19)
1. A method for inputting expression information, characterized in that the method comprises:
obtaining target feature information of an inputter, the target feature information comprising at least one of the following: facial feature information and limb feature information;
obtaining target expression information corresponding to the target feature information; and
inputting the target expression information.
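The three steps of claim 1 can be illustrated with a minimal, self-contained sketch. All names here (`FEATURE_TO_EXPRESSION`, `extract_features`, `input_expression`) are hypothetical stand-ins, not terms from the patent, and real feature extraction would involve face or gesture detection rather than string matching:

```python
# Hypothetical sketch of the claimed flow: obtain feature information of
# the inputter, obtain the corresponding expression, and input it.

# Assumed mapping between recognized feature labels and expressions.
FEATURE_TO_EXPRESSION = {
    "smile": "😄",        # a facial feature
    "frown": "😟",        # a facial feature
    "thumbs_up": "👍",    # a limb (gesture) feature
}

def extract_features(raw_input):
    """Stand-in for facial/limb feature recognition on captured input."""
    # A real implementation would run face/gesture detection here.
    return [label for label in FEATURE_TO_EXPRESSION if label in raw_input]

def input_expression(raw_input):
    """Return the expression(s) corresponding to the detected features."""
    features = extract_features(raw_input)
    return [FEATURE_TO_EXPRESSION[f] for f in features]
```

The point of the sketch is only the shape of the pipeline: detection produces feature labels, which index into a feature-to-expression correspondence, and the result is what gets input.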
2. The method according to claim 1, characterized in that obtaining the target feature information of the inputter comprises:
obtaining target information, the target information comprising at least one of the following: first image information and audio information; and
obtaining the target feature information from the target information.
3. The method according to claim 2, characterized in that obtaining the target information comprises:
collecting the target information by a collecting device; or
obtaining target information selected by the inputter from a local database.
4. The method according to claim 1, characterized in that the method further comprises:
obtaining a target database, the target database comprising a correspondence between feature information of the inputter and expression information;
wherein obtaining the target expression information corresponding to the target feature information comprises:
obtaining the target expression information corresponding to the target feature information according to the target database.
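The "target database" of claim 4 is, at its simplest, a lookup from feature information to expression information. A hedged sketch (the key structure and names are illustrative assumptions, not the patent's):

```python
# Hypothetical "target database": a mapping from feature information
# (here, (category, label) tuples) to expression information.
target_database = {
    ("face", "smile"): ":-)",
    ("face", "frown"): ":-(",
    ("limb", "wave"): "o/",
}

def lookup_expression(feature_key, database=target_database):
    """Return the expression mapped to the feature, or None on a miss."""
    return database.get(feature_key)
```

In practice the database could be a local table or a remote service; the claim only requires that it encode the correspondence being consulted.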
5. The method according to claim 4, characterized in that the target expression information comprises any one of the following: expression icon information, emoticon information, and second image information;
wherein the second image information is obtained according to the first image information.
6. The method according to claim 5, characterized in that the method further comprises:
determining whether the target database comprises a correspondence between the target feature information and the target expression information;
wherein obtaining the target expression information corresponding to the target feature information comprises:
when the target database does not comprise the correspondence between the target feature information and the target expression information, taking the first image information as the second image information to obtain the target expression information; or processing the first image information to obtain the second image information, and taking the second image information as the target expression information.
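Claim 6 describes a fallback: on a database miss, the captured first image itself (optionally processed) becomes the expression. A minimal sketch, assuming illustrative names throughout:

```python
# Hypothetical sketch of the claim-6 fallback. On a database hit the
# stored expression is returned; on a miss, the first image (optionally
# run through a processing step) serves as the expression itself.
def resolve_expression(feature_key, first_image, database, process=None):
    expression = database.get(feature_key)
    if expression is not None:
        return expression
    # Miss: the first image (possibly processed) is the second image
    # information, which is then used as the target expression.
    second_image = process(first_image) if process else first_image
    return second_image
```

The `process` hook stands in for the synthesis or parameter-adjustment steps of claims 7–9.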
7. The method according to claim 6, characterized in that processing the first image information to obtain the second image information, and taking the second image information as the target expression information, comprises:
obtaining a model image selected by the inputter; and
synthesizing the first image information with the model image to obtain the second image information, and taking the second image information as the target expression information.
8. The method according to claim 7, characterized in that synthesizing the first image information with the model image to obtain the second image information comprises:
extracting feature information of the user from the first image information; and
adding the feature information of the user to an image region of the model image selected by the inputter.
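The synthesis step of claim 8 amounts to pasting an extracted feature region (e.g. a cropped face) into a chosen region of the model image. A toy sketch using 2-D grids of pixel values in place of real images (the function name and representation are assumptions for illustration):

```python
# Hypothetical sketch of claim 8's synthesis: paste the user's feature
# patch into the region of the model image selected by the inputter.
# Images are toy 2-D lists of pixel values.
def paste_region(model_image, feature_patch, top, left):
    """Return a copy of model_image with feature_patch pasted at (top, left)."""
    result = [row[:] for row in model_image]  # copy rows, leave input intact
    for dy, patch_row in enumerate(feature_patch):
        for dx, pixel in enumerate(patch_row):
            result[top + dy][left + dx] = pixel
    return result
```

A production implementation would instead use an imaging library's compositing (with alpha blending along the patch border), but the region-placement logic is the same.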
9. The method according to claim 6, characterized in that processing the first image information to obtain the second image information, and taking the second image information as the target expression information, comprises:
obtaining an image parameter of the first image information; and
adjusting the image parameter to a target parameter determined by the inputter to obtain the second image information, and taking the second image information as the target expression information.
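Claim 9's parameter adjustment can be sketched with brightness as the example image parameter (the choice of brightness, and all names, are assumptions; the claim covers any image parameter the inputter targets):

```python
# Hypothetical sketch of claim 9: adjust an image parameter (here, mean
# brightness) to a target value chosen by the inputter, yielding the
# second image information. Pixels are ints in 0..255.
def adjust_brightness(image, target_mean):
    """Scale pixel values so the image's mean matches target_mean."""
    pixels = [p for row in image for p in row]
    current_mean = sum(pixels) / len(pixels)
    if current_mean == 0:
        # All-black image: fill uniformly with the target value.
        return [[target_mean for _ in row] for row in image]
    scale = target_mean / current_mean
    return [[min(255, round(p * scale)) for p in row] for row in image]
```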
10. A device for inputting expression information, characterized in that the device comprises:
a first obtaining module, configured to obtain target feature information of an inputter, the target feature information comprising at least one of the following: facial feature information and limb feature information;
a second obtaining module, configured to obtain target expression information corresponding to the target feature information; and
an input module, configured to input the target expression information.
11. The device according to claim 10, characterized in that the first obtaining module comprises:
a first obtaining submodule, configured to obtain target information, the target information comprising at least one of the following: first image information and audio information; and
a second obtaining submodule, configured to obtain the target feature information from the target information.
12. The device according to claim 11, characterized in that the first obtaining submodule is configured to collect the target information by a collecting device, or to obtain target information selected by the inputter from a local database.
13. The device according to claim 10, characterized in that the device further comprises:
a third obtaining module, configured to obtain a target database, the target database comprising a correspondence between feature information of the inputter and expression information;
wherein the second obtaining module is configured to obtain the target expression information corresponding to the target feature information according to the target database.
14. The device according to claim 13, characterized in that the target expression information comprises any one of the following: expression icon information, emoticon information, and second image information;
wherein the second image information is obtained according to the first image information.
15. The device according to claim 14, characterized in that the device further comprises:
a determining module, configured to determine whether the target database comprises a correspondence between the target feature information and the target expression information;
wherein the second obtaining module is configured to, when the target database does not comprise the correspondence between the target feature information and the target expression information, take the first image information as the second image information to obtain the target expression information; or process the first image information to obtain the second image information, and take the second image information as the target expression information.
16. The device according to claim 15, characterized in that the second obtaining module is configured to obtain a model image selected by the inputter, synthesize the first image information with the model image to obtain the second image information, and take the second image information as the target expression information.
17. The device according to claim 16, characterized in that the second obtaining module is configured to extract feature information of the user from the first image information, and add the feature information of the user to an image region of the model image selected by the inputter.
18. The device according to claim 15, characterized in that the second obtaining module is configured to obtain an image parameter of the first image information, adjust the image parameter to a target parameter determined by the inputter to obtain the second image information, and take the second image information as the target expression information.
19. A device for inputting expression information, characterized by comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to: obtain target feature information of an inputter, the target feature information comprising at least one of the following: facial feature information and limb feature information; obtain target expression information corresponding to the target feature information; and input the target expression information.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611188433.XA CN106649712B (en) | 2016-12-20 | 2016-12-20 | Method and device for inputting expression information |
US15/837,772 US20180173394A1 (en) | 2016-12-20 | 2017-12-11 | Method and apparatus for inputting expression information |
EP17207154.0A EP3340077B1 (en) | 2016-12-20 | 2017-12-13 | Method and apparatus for inputting expression information |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611188433.XA CN106649712B (en) | 2016-12-20 | 2016-12-20 | Method and device for inputting expression information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106649712A true CN106649712A (en) | 2017-05-10 |
CN106649712B CN106649712B (en) | 2020-03-03 |
Family
ID=58834331
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611188433.XA Active CN106649712B (en) | 2016-12-20 | 2016-12-20 | Method and device for inputting expression information |
Country Status (3)
Country | Link |
---|---|
US (1) | US20180173394A1 (en) |
EP (1) | EP3340077B1 (en) |
CN (1) | CN106649712B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109254669A (en) * | 2017-07-12 | 2019-01-22 | 腾讯科技(深圳)有限公司 | A kind of expression picture input method, device, electronic equipment and system |
CN109670393A (en) * | 2018-09-26 | 2019-04-23 | 平安科技(深圳)有限公司 | Human face data acquisition method, unit and computer readable storage medium |
JP2019129413A (en) * | 2018-01-24 | 2019-08-01 | 株式会社見果てぬ夢 | Broadcast wave receiving device, broadcast reception method, and broadcast reception program |
WO2020228208A1 (en) * | 2019-05-13 | 2020-11-19 | 深圳传音控股股份有限公司 | User smart device and emoticon processing method therefor |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108985241B (en) * | 2018-07-23 | 2023-05-02 | 腾讯科技(深圳)有限公司 | Image processing method, device, computer equipment and storage medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102801652A (en) * | 2012-08-14 | 2012-11-28 | 上海量明科技发展有限公司 | Method, client and system for adding contact persons through expression data |
CN103442137A (en) * | 2013-08-26 | 2013-12-11 | 苏州跨界软件科技有限公司 | Method for allowing a user to look over virtual face of opposite side in mobile phone communication |
CN103647922A (en) * | 2013-12-20 | 2014-03-19 | 百度在线网络技术(北京)有限公司 | Virtual video call method and terminals |
CN103916536A (en) * | 2013-01-07 | 2014-07-09 | 三星电子株式会社 | Mobile device user interface method and system |
CN104635930A (en) * | 2015-02-09 | 2015-05-20 | 联想(北京)有限公司 | Information processing method and electronic device |
US20150220774A1 (en) * | 2014-02-05 | 2015-08-06 | Facebook, Inc. | Ideograms for Captured Expressions |
US20150242679A1 (en) * | 2014-02-25 | 2015-08-27 | Facebook, Inc. | Techniques for emotion detection and content delivery |
CN105897551A (en) * | 2015-02-13 | 2016-08-24 | 国际商业机器公司 | Point In Time Expression Of Emotion Data Gathered From A Chat Session |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050137015A1 (en) * | 2003-08-19 | 2005-06-23 | Lawrence Rogers | Systems and methods for a role-playing game having a customizable avatar and differentiated instant messaging environment |
US8210848B1 (en) * | 2005-03-07 | 2012-07-03 | Avaya Inc. | Method and apparatus for determining user feedback by facial expression |
JP2007041988A (en) * | 2005-08-05 | 2007-02-15 | Sony Corp | Information processing device, method and program |
US20100177116A1 (en) * | 2009-01-09 | 2010-07-15 | Sony Ericsson Mobile Communications Ab | Method and arrangement for handling non-textual information |
TWI430189B (en) * | 2009-11-10 | 2014-03-11 | Inst Information Industry | System, apparatus and method for message simulation |
WO2013077076A1 (en) * | 2011-11-24 | 2013-05-30 | 株式会社エヌ・ティ・ティ・ドコモ | Expression output device and expression output method |
WO2014036708A1 (en) * | 2012-09-06 | 2014-03-13 | Intel Corporation | System and method for avatar creation and synchronization |
US10289265B2 (en) * | 2013-08-15 | 2019-05-14 | Excalibur Ip, Llc | Capture and retrieval of a personalized mood icon |
US9264770B2 (en) * | 2013-08-30 | 2016-02-16 | Rovi Guides, Inc. | Systems and methods for generating media asset representations based on user emotional responses |
JP2016009453A (en) * | 2014-06-26 | 2016-01-18 | オムロン株式会社 | Face authentication device and face authentication method |
US20160191958A1 (en) * | 2014-12-26 | 2016-06-30 | Krush Technologies, Llc | Systems and methods of providing contextual features for digital communication |
WO2016014597A2 (en) * | 2014-07-21 | 2016-01-28 | Feele, A Partnership By Operation Of Law | Translating emotions into electronic representations |
CN105184249B (en) * | 2015-08-28 | 2017-07-18 | 百度在线网络技术(北京)有限公司 | Method and apparatus for face image processing |
JP6985005B2 (en) * | 2015-10-14 | 2021-12-22 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America | Emotion estimation method, emotion estimation device, and recording medium on which the program is recorded. |
2016
- 2016-12-20 CN CN201611188433.XA patent/CN106649712B/en active Active

2017
- 2017-12-11 US US15/837,772 patent/US20180173394A1/en not_active Abandoned
- 2017-12-13 EP EP17207154.0A patent/EP3340077B1/en active Active
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102801652A (en) * | 2012-08-14 | 2012-11-28 | 上海量明科技发展有限公司 | Method, client and system for adding contact persons through expression data |
CN103916536A (en) * | 2013-01-07 | 2014-07-09 | 三星电子株式会社 | Mobile device user interface method and system |
CN103442137A (en) * | 2013-08-26 | 2013-12-11 | 苏州跨界软件科技有限公司 | Method for allowing a user to look over virtual face of opposite side in mobile phone communication |
CN103647922A (en) * | 2013-12-20 | 2014-03-19 | 百度在线网络技术(北京)有限公司 | Virtual video call method and terminals |
US20150220774A1 (en) * | 2014-02-05 | 2015-08-06 | Facebook, Inc. | Ideograms for Captured Expressions |
US20150242679A1 (en) * | 2014-02-25 | 2015-08-27 | Facebook, Inc. | Techniques for emotion detection and content delivery |
CN104635930A (en) * | 2015-02-09 | 2015-05-20 | 联想(北京)有限公司 | Information processing method and electronic device |
CN105897551A (en) * | 2015-02-13 | 2016-08-24 | 国际商业机器公司 | Point In Time Expression Of Emotion Data Gathered From A Chat Session |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109254669A (en) * | 2017-07-12 | 2019-01-22 | 腾讯科技(深圳)有限公司 | A kind of expression picture input method, device, electronic equipment and system |
CN109254669B (en) * | 2017-07-12 | 2022-05-10 | 腾讯科技(深圳)有限公司 | Expression picture input method and device, electronic equipment and system |
JP2019129413A (en) * | 2018-01-24 | 2019-08-01 | 株式会社見果てぬ夢 | Broadcast wave receiving device, broadcast reception method, and broadcast reception program |
JP7017755B2 (en) | 2018-01-24 | 2022-02-09 | 株式会社見果てぬ夢 | Broadcast wave receiver, broadcast reception method, and broadcast reception program |
CN109670393A (en) * | 2018-09-26 | 2019-04-23 | 平安科技(深圳)有限公司 | Human face data acquisition method, unit and computer readable storage medium |
CN109670393B (en) * | 2018-09-26 | 2023-12-19 | 平安科技(深圳)有限公司 | Face data acquisition method, equipment, device and computer readable storage medium |
WO2020228208A1 (en) * | 2019-05-13 | 2020-11-19 | 深圳传音控股股份有限公司 | User smart device and emoticon processing method therefor |
Also Published As
Publication number | Publication date |
---|---|
US20180173394A1 (en) | 2018-06-21 |
CN106649712B (en) | 2020-03-03 |
EP3340077A1 (en) | 2018-06-27 |
EP3340077B1 (en) | 2019-04-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108363706B (en) | Method and device for man-machine dialogue interaction | |
CN107894833B (en) | Multi-modal interaction processing method and system based on virtual human | |
CN105119812B (en) | In the method, apparatus and terminal device of chat interface change emoticon | |
EP3340077B1 (en) | Method and apparatus for inputting expression information | |
CN107370887B (en) | Expression generation method and mobile terminal | |
CN110188177A (en) | Talk with generation method and device | |
CN109660728B (en) | Photographing method and device | |
CN107832784B (en) | Image beautifying method and mobile terminal | |
CN105825486A (en) | Beautifying processing method and apparatus | |
CN109691074A (en) | The image data of user's interaction for enhancing | |
CN105930035A (en) | Interface background display method and apparatus | |
KR101170338B1 (en) | Method For Video Call And System thereof | |
CN106464939A (en) | Method and device for playing sound effect | |
CN109819167B (en) | Image processing method and device and mobile terminal | |
CN105302315A (en) | Image processing method and device | |
CN107832036A (en) | Sound control method, device and computer-readable recording medium | |
CN110147467A (en) | A kind of generation method, device, mobile terminal and the storage medium of text description | |
CN113051427A (en) | Expression making method and device | |
CN109168062A (en) | Methods of exhibiting, device, terminal device and the storage medium of video playing | |
WO2018098968A9 (en) | Photographing method, apparatus, and terminal device | |
CN104333688B (en) | The device and method of image formation sheet feelings symbol based on shooting | |
CN109308178A (en) | A kind of voice drafting method and its terminal device | |
CN109033423A (en) | Simultaneous interpretation caption presentation method and device, intelligent meeting method, apparatus and system | |
CN107529699A (en) | Control method of electronic device and device | |
CN111526287A (en) | Image shooting method, image shooting device, electronic equipment, server, image shooting system and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||