CN109510897A - Expression picture management method and mobile terminal - Google Patents
- Publication number
- CN109510897A CN109510897A CN201811253363.0A CN201811253363A CN109510897A CN 109510897 A CN109510897 A CN 109510897A CN 201811253363 A CN201811253363 A CN 201811253363A CN 109510897 A CN109510897 A CN 109510897A
- Authority
- CN
- China
- Prior art keywords
- expression
- expression picture
- picture
- meaning
- mobile terminal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72469—User interfaces specially adapted for cordless or mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons
- H04M1/72472—User interfaces specially adapted for cordless or mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons wherein the items are sorted according to specific criteria, e.g. frequency of use
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/7243—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
- H04M1/72439—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
Abstract
An embodiment of the invention provides an expression picture management method and a mobile terminal, relating to the field of communication technology, to solve the problem that the cumbersome, time-consuming steps a user performs when selecting an expression picture reduce the user's communication efficiency. The expression picture management method comprises: obtaining an expression picture in the mobile terminal; identifying the meaning of the expression picture; and, according to the meaning of the expression picture, distributing the expression picture into an expression group corresponding to that meaning. The expression picture management method in the embodiment of the invention is applied to a mobile terminal.
Description
Technical field
Embodiments of the present invention relate to the field of communication technology, and in particular to an expression picture management method and a mobile terminal.
Background technique
When chatting on a mobile terminal, people often use various expression pictures (sticker or emoji images) to communicate and express emotion. Compared with plain text, expression pictures are more vivid and more entertaining, and can increase the user's interest in chatting.
In existing expression picture management schemes, pictures are classified and managed by the theme of the expression pack. When chatting, if a user wants to express an emotion such as anger or happiness, the user can only browse the expression packs of various themes for a suitable picture. At present, the user has no rule to follow when selecting an expression picture, so time is wasted on the selection, which reduces the user's communication efficiency and in turn degrades the user's chat experience.
Summary of the invention
An embodiment of the present invention provides an expression picture management method to solve the problem that the cumbersome, time-consuming steps a user performs when selecting an expression picture reduce the user's communication efficiency.
To solve the above technical problem, the present invention is implemented as follows.
In a first aspect, an embodiment of the invention provides an expression picture management method, applied to a mobile terminal, comprising: obtaining an expression picture in the mobile terminal; identifying the meaning of the expression picture; and, according to the meaning of the expression picture, distributing the expression picture into an expression group corresponding to that meaning.
In a second aspect, an embodiment of the invention provides a mobile terminal, comprising: an obtaining module, configured to obtain an expression picture in the mobile terminal; an identification module, configured to identify the meaning of the expression picture; and a distribution module, configured to distribute the expression picture into an expression group corresponding to the meaning of the expression picture.
In a third aspect, an embodiment of the invention provides a mobile terminal, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the expression picture management method.
In a fourth aspect, an embodiment of the invention provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the expression picture management method.
In the embodiments of the present invention, an expression picture in the mobile terminal can be obtained, and the meaning of the obtained picture identified, so that after the meaning the picture is intended to express has been recognized, the picture is classified by that meaning and distributed into the corresponding expression group. For example, the picture may be distributed into the group that the meaning itself belongs to; alternatively, a classification word may be derived from the meaning and the picture distributed into the group related to that classification word. Based on this process, the expression pictures in the mobile terminal can be sorted by the core meaning they express, realizing semantic management of expression pictures and meeting the user's core need when selecting an expression. When the user wants to express some meaning, such as a certain action, emotion, or mood, the user can directly find the group for that meaning among the expression groups and quickly locate the desired picture within it. This spares the user the tedious, ruleless search through a large number of expression pictures, saves the time spent selecting an expression picture, improves the user's communication efficiency, and optimizes the chat experience.
Detailed description of the invention
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a first flowchart of the expression picture management method of an embodiment of the present invention;
Fig. 2 is a second flowchart of the expression picture management method of an embodiment of the present invention;
Fig. 3 is a third flowchart of the expression picture management method of an embodiment of the present invention;
Fig. 4 is a fourth flowchart of the expression picture management method of an embodiment of the present invention;
Fig. 5 is a fifth flowchart of the expression picture management method of an embodiment of the present invention;
Fig. 6 is a first block diagram of the mobile terminal of an embodiment of the present invention;
Fig. 7 is a second block diagram of the mobile terminal of an embodiment of the present invention;
Fig. 8 is a third block diagram of the mobile terminal of an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art without creative effort, on the basis of the embodiments of the invention, fall within the protection scope of the invention.
Referring to Fig. 1, a flowchart of the expression picture management method of an embodiment of the present invention is shown. The method is applied to a mobile terminal and comprises:
Step S1: obtain an expression picture in the mobile terminal.
In this step, the expression pictures obtained by the mobile terminal include, but are not limited to, expression pictures downloaded in social software, expression pictures collected by the user, and expression pictures in themed expression packs.
Expression pictures include text-only pictures, image-only pictures, pictures combining text and image, and so on; the text, images, and other content in a picture may be displayed dynamically or statically.
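As a minimal sketch of Step S1 (the folder layout and extension list are assumptions for illustration, not specified by the patent), the terminal could gather candidate pictures by scanning its download, favorites, and theme-pack directories:

```python
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".gif", ".webp"}  # assumed sticker formats

def collect_expression_pictures(*roots):
    """Gather expression picture files from the given directories, recursively."""
    found = []
    for root in roots:
        root = Path(root)
        if not root.exists():
            continue  # e.g. a theme pack that is not installed
        for p in sorted(root.rglob("*")):
            if p.is_file() and p.suffix.lower() in IMAGE_EXTS:
                found.append(p)
    return found
```

The later steps then operate on this list of files.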
Step S2: identify the meaning of the expression picture.
In this step, the meaning of the expression picture can be identified from several angles. For example, the emotional attributes of the picture can be analyzed to identify that its core meaning is an emotion such as happiness or sadness; as another example, the semantics of the text in the picture can be analyzed; as yet another example, the image content of the picture can be analyzed.
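The patent does not specify a concrete recognition algorithm. As a toy, hedged sketch of the idea, extracted tokens (from OCR or image labels, produced elsewhere) can be mapped to a meaning by a keyword table; the table below is invented for illustration:

```python
# Rule table mapping keywords to meanings -- an illustrative assumption,
# not the patent's method (which leaves the recognizer unspecified).
MEANING_RULES = {
    "happy": ["haha", "happy", "laugh", "joy"],
    "sad": ["sad", "cry", "tears"],
    "praise": ["awesome", "excellent", "bravo"],
}

def identify_meaning(extracted_tokens):
    """Return the first meaning whose keyword list intersects the tokens."""
    tokens = {t.lower() for t in extracted_tokens}
    for meaning, keywords in MEANING_RULES.items():
        if tokens & set(keywords):
            return meaning
    return "unclassified"
```

A production system would replace the table with sentiment analysis or a learned classifier, as the surrounding text suggests.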
Step S3: according to the meaning of the expression picture, distribute the expression picture into the expression group corresponding to that meaning.
In this step, expression pictures whose expressed meanings are identical or related can be treated as one class and collected into the same expression group, so that when the user wants to express that meaning, a picture can be chosen directly from the corresponding group.
Preferably, multiple expression groups are established in this embodiment according to the different meanings of the expression pictures. Specifically, groups can be established according to factors such as the user's personal usage habits, selection preferences, emotional attributes, and mood semantics, so that after the meaning of an expression picture is recognized, its key content can be extracted and the picture distributed into the group corresponding to that meaning.
In practical application, based on the user's personal usage habits, if the user often sends expression pictures related to, e.g., "spicy eyes", "moving bricks", or "praising", expression groups for "spicy eyes", "moving bricks", and "praising" can be established respectively. As another example, based on the user's selection preference, if the user often sends expressions related to "haha" when happy, a "haha" group can be established. As another example, based on differences in emotional attributes, groups for "happy" and "sad" can be established respectively. As yet another example, if the user often sends expressions such as "amazing", "well done", or "excellent", groups for "praise" and "commendation" can be established based on mood semantics.
For the same expression picture, depending on the direction from which its meaning is analyzed, it may be distributed into multiple expression groups. For example, a picture of a hearty laugh can be distributed at least into the "happy" group and the "haha" group.
In this step, a piece of information used for distribution can be derived from the recognized meaning of the expression picture and defined as a classification word, so that the picture is distributed by classification word into the corresponding expression group, including a group established from the classification word itself and groups related to the classification word. For example, the classification word "moving bricks" can be obtained from the text or image in a picture, and the recognized picture is then assigned to the "moving bricks" group or the "brick" group.
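The classification-word distribution above can be sketched as follows; the related-word table and group names are assumptions made for illustration:

```python
# Classification word -> related group names (invented example data).
RELATED = {"brick": {"moving bricks"}}

def distribute(picture_id, classification_word, groups):
    """Place a picture into the group named by its classification word
    plus any groups related to that word, creating groups as needed."""
    targets = {classification_word} | RELATED.get(classification_word, set())
    for name in targets:
        groups.setdefault(name, []).append(picture_id)
    return groups
```

This also shows how one picture can land in several groups, matching the multi-group behavior described above.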
In the embodiments of the present invention, as set out above, the expression pictures in the mobile terminal are obtained, their meanings identified, and the pictures distributed into the expression groups corresponding to those meanings, which saves the time the user spends selecting an expression picture, improves the user's communication efficiency, and optimizes the chat experience.
On the basis of the embodiment shown in Fig. 1, Fig. 2 shows a flowchart of the expression picture management method of an embodiment of the present invention, in which step S2 comprises:
Step S21: if text information is detected in the expression picture, identify the meaning of the text information.
In this embodiment, the core meaning expressed by the expression picture is identified by considering the dimension of what information types the picture contains. In one case, the information type contained in the picture is text.
After the mobile terminal obtains the expression picture, it can first detect whether the picture contains text information; a picture containing text may be a text-only picture or a combination of text and image. Once text is detected in the picture, its meaning can be identified from the literal semantics of the text, or from the emotion and mood the text expresses.
For example, all pictures containing the text "moving bricks" can be distributed into one expression group; likewise, all pictures containing text such as "happy" and "joyful" can be distributed into one group.
Preferably, optical character recognition (OCR) is used to recognize the text information in the expression picture, followed by semantic analysis and big-data semantic summarization to identify the meaning of the text, thereby obtaining the meaning of the picture or its classification word.
In this embodiment, because the meaning expressed by text information is easier to identify, the meaning of the text is preferentially identified when identifying the meaning of an expression picture.
If the dimension of the picture's display state is considered instead when identifying the core meaning, then in one case the text information is displayed statically and in another it is displayed dynamically; in either case, OCR is used to recognize the text information in the picture.
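The S21 text path can be sketched as a small pipeline. On a device this would call a real OCR engine (Tesseract via pytesseract is a common choice); here `ocr` is a stub so the sketch stays self-contained, and the keyword-to-group table is an assumption:

```python
def ocr(picture):
    # Stand-in for optical character recognition on the picture bitmap;
    # a real implementation would run an OCR engine here.
    return picture.get("text", "")

def classify_by_text(picture, keyword_to_group):
    """Return the expression group whose keyword appears in the OCR'd text,
    or None if no keyword matches (falling through to the image path)."""
    words = ocr(picture).lower()
    for keyword, group in keyword_to_group.items():
        if keyword in words:
            return group
    return None
```

Returning None models the fallback to step S22, where image information is analyzed instead.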
On the basis of the embodiment shown in Fig. 1, Fig. 3 shows a flowchart of the expression picture management method of an embodiment of the present invention, in which step S2 comprises:
Step S22: if no text information is detected in the expression picture, identify the image information in the picture.
This embodiment continues to consider the dimension of the information types contained in the expression picture when identifying its core meaning. In another case, the information type contained in the picture is image information.
After the mobile terminal obtains the expression picture, it first detects whether the picture contains text information. In this embodiment, if no text is detected, the image information contained in the picture is further identified. Pictures in which no text is detected include image-only pictures and pictures that combine text and image but whose text meaning cannot be recognized.
Preferably, image recognition technology is used to identify the image information in the expression picture.
If the dimension of the picture's display state is considered when identifying the core meaning, then in one case the image information in the picture is displayed statically. Accordingly, step S3 comprises:
Step S31: if the image information is first static image information, obtain at least one first object contained in the first static image information.
Preferably, image recognition technology is used to analyze the first static image information. Illustratively, target segmentation can be performed on the content body of the first static image information to obtain at least one first object contained in it.
A first object is not limited to a person, a physical object, scenery, and the like.
In practical application, if the content expressed by the first static image information is a person moving bricks, then in this step, by performing target segmentation on the content body of the first static image information, at least the two first objects "person" and "brick" can be obtained.
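Target segmentation on a real picture would use a vision model; as a self-contained stand-in, the sketch below treats each 4-connected foreground region of a binary mask as one candidate "first object" (the mask representation is an assumption for illustration):

```python
def segment_objects(mask):
    """Return a list of pixel-coordinate sets, one per 4-connected
    foreground region of a binary mask (1 = foreground, 0 = background)."""
    h, w = len(mask), len(mask[0])
    seen, objects = set(), []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and (y, x) not in seen:
                stack, region = [(y, x)], set()
                seen.add((y, x))
                while stack:  # flood-fill one region
                    cy, cx = stack.pop()
                    region.add((cy, cx))
                    for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                objects.append(region)
    return objects
```

In the "person moving bricks" example, two disjoint regions would correspond to the two first objects.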
Step S32: according to the at least one obtained first object, obtain, from the multiple expression groups, a first target expression picture matching the first object.
Based on the obtained first object, expression pictures can be retrieved from big data or from the mobile terminal to obtain a first target expression picture matching the first object.
In practical application, the matching process is, for example: retrieve expression pictures containing image information, and compare the first object with the image information in each picture; if the similarity between the image information of a retrieved picture and the first object exceeds a preset value, e.g., 80%, the match succeeds. For example, if the first object is an image of a "brick", pictures containing a "brick" image can be determined to be first target expression pictures.
As another example: retrieve expression pictures containing text information, and compare the first object with the text in each picture; if the similarity between the text of a retrieved picture and the first object exceeds the preset value, e.g., 80%, the match succeeds. For example, if the first object is an image of a "brick", image recognition identifies it as a "brick", so pictures whose text carries the meaning "brick" can be determined to be first target expression pictures.
Depending on which first object is used for matching, the matched first target expression pictures also differ. Further, a first target expression picture can be determined comprehensively from the matching results of multiple first objects, or multiple first target expression pictures can be determined from those results.
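The "similarity exceeds a preset value, e.g., 80%" test can be sketched as a threshold on a similarity score. The patent specifies only the threshold, not the representation; the feature vectors below (standing in for image or text embeddings) are an assumption:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def match_target_pictures(obj_vec, candidates, threshold=0.8):
    """Return ids of candidate pictures whose similarity to the first
    object's feature vector exceeds the preset value (80% by default)."""
    return [pid for pid, vec in candidates.items() if cosine(obj_vec, vec) > threshold]
```

Any candidates that pass become first target expression pictures, whose group then receives the picture being distributed.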
Step S33: distribute the expression picture into the expression group to which the first target expression picture belongs.
Here, it is preferable to match against the expression pictures already inside expression groups, so that the picture to be distributed can be placed into the same group as the first target expression picture.
Expression pictures outside the expression groups can also be retrieved for matching; when a first target expression picture is matched, the picture to be distributed can be associated with it, a classification word obtained from the association result, and the distribution then performed.
For example, if the first object is an image of a "brick", an object of the same shape is matched in the first target expression picture, and the text "brick" is recognized in that picture; the recognized first object can thus be considered related to the text "brick", and the obtained expression picture can then be distributed into a group such as "moving bricks".
It is further understood that when matching is performed against pictures not yet in an expression group, if a first target expression picture is matched successfully, that picture is preferably distributed first; then, based on the association between the picture to be distributed and the first target expression picture, both are distributed into the same expression group.
In a broad sense, because the obtained picture is classified according to the first target expression picture, the first target expression picture itself can be distributed directly according to its own meaning, without matching further related pictures; a first target expression picture can therefore also be regarded as an expression picture within an expression group.
It should be noted that retrieval can be performed based on multiple first objects, or the main first object among them can be intelligently recognized and used for retrieval.
This embodiment is aimed mainly at static expression pictures whose text information cannot be recognized: by retrieving other expression pictures, the classification word of the picture is obtained and its expression group determined from that word. A highly preferred scheme is simply to place the expression picture and the related retrieved pictures into one group.
In addition, if the dimension of the picture's display state is considered when identifying the core meaning expressed by the picture, then in another case the image information in the picture is displayed dynamically.
Accordingly, on the basis of the embodiment shown in Fig. 1, Fig. 4 shows a flowchart of the expression picture management method of an embodiment of the present invention, in which step S2 comprises:
Step S23: if no text information is detected in the expression picture, identify the image information in the picture.
Step S3 comprises:
Step S34: if the image information is dynamic image information, obtain second static image information of the dynamic image information at at least one moment.
Step S35: obtain at least one second object contained in the second static image information.
Step S36: according to the at least one obtained second object, obtain, from the multiple expression groups, a second target expression picture matching the second object.
Step S37: distribute the expression picture into the expression group to which the second target expression picture belongs.
Unlike statically displayed image information, dynamically displayed image information changes in real time. In this case, image recognition technology is used to split the dynamic image information of the expression picture frame by frame, obtaining the static image information corresponding to each frame. From these, the static image information corresponding to key frames is further selected as the second static image information, and the second target expression picture is then matched according to the static-image analysis method of the previous embodiment.
Specifically, target segmentation can be performed on each frame's static image information to identify the target objects contained in the dynamic image information; from each object's content in every frame, the object's motion path through the dynamic image information is determined and its behavior obtained; the behavior is confirmed by big-data analysis, so that the more important static image information, i.e., the key frames, is obtained from it. For example, the frames in which a target object's behavior changes greatly can be chosen, by big data or intelligent analysis, as the key frames.
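A toy key-frame picker for the S34 path can be sketched as follows; the frames are flat grayscale tuples standing in for decoded GIF frames (a real implementation would iterate frames with an image library such as Pillow), and "the behavior changes greatly" is approximated by a large mean pixel difference between consecutive frames:

```python
def key_frames(frames, threshold=10.0):
    """Return indices of frames whose mean per-pixel change from the
    previous frame exceeds the threshold; the first frame is always kept."""
    keys = [0] if frames else []
    for i in range(1, len(frames)):
        diff = sum(abs(a - b) for a, b in zip(frames[i], frames[i - 1])) / len(frames[i])
        if diff > threshold:
            keys.append(i)
    return keys
```

Each selected frame then goes through the static-image matching of steps S31-S33.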
In this embodiment, the number of target objects contained in the dynamic image information is not limited to one, and each object's behavior differs; therefore, based on multiple target objects, multiple frames of second static image information can be obtained, the correspondingly matched second target expression pictures are also multiple, and the expression picture can be distributed into different expression groups.
Even for a single target object, multiple frames of second static image information can be obtained, and each frame may match one or several second target expression pictures, so the expression picture can likewise be distributed into different expression groups.
It is conceivable that, for expression pictures whose text information is displayed dynamically, the text shown in each frame can also be obtained, and the classification word of the picture derived from the text of all frames. Meanwhile, big-data analysis can confirm how the text information changes, so as to obtain the key text information, from which the classification word of the picture is then derived.
Preferably, for an expression picture containing image information, the image information can also be recognized directly with image recognition technology, and a classification word derived from it, so as to further determine the expression group the picture belongs to.
On the basis of the embodiment shown in Fig. 1, Fig. 5 shows a flowchart of the expression picture management method of an embodiment of the present invention, which further comprises, after step S3:
Step S4: obtain the usage frequency information of the multiple expression groups within a preset duration.
In this step, taking the preset duration as the unit, the usage frequency of all expression pictures within an expression group is counted to obtain that group's usage frequency information, and the usage frequency information of the multiple groups is obtained in the same way.
Step S5: sort the multiple expression groups in descending order of usage frequency.
In this step, the expression groups the user uses most often are placed first, and the rest follow in order, so as to match the user's personal habits, embody the targeted-service feature of this embodiment, and optimize the user's chat experience.
Further, the usage frequency of the expression pictures within each group can also be counted and the pictures prioritized by frequency, so that the user's commonly used pictures come first, further improving the user experience.
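Steps S4 and S5 can be sketched as a frequency count followed by a sort; the usage-log format (one group name per send event within the preset duration) is an assumption for illustration:

```python
from collections import Counter

def rank_groups(usage_events, all_groups):
    """Order expression groups by how often their pictures were sent in
    the preset duration, most frequently used first (stable for ties)."""
    counts = Counter(usage_events)
    return sorted(all_groups, key=lambda g: -counts[g])
```

The same pattern applies within a group to order individual pictures by their own usage counts.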
It is worth noting that the above embodiments can not only classify and manage the expression pictures the user commonly uses, but can also automatically identify the expression database, chat records, etc. in social software, automatically analyze other new expressions, and automatically add them to the end of each expression group, so as to enrich the pictures in each group and offer the user more choices.
Preferably, in order to identify an expression picture in a message list, the picture can be shown in the list as a text abstract, making it convenient for the user to quickly preview the list's content; the text abstract may carry the meaning of the expression picture.
In summary, the above embodiments can automatically identify the semantics, emotional expression, and other meanings of the expression pictures the user often uses and classify them accordingly, while also performing personalized classification based on each user's habits, thereby automatically distributing expression groups for the user, managing the groups more intelligently and conveniently, improving the efficiency with which the user finds an expression, and helping the user locate a specific expression more accurately.
Fig. 6 shows the block diagram of a mobile terminal of another embodiment of the present invention, comprising:
an obtaining module 10, configured to obtain an expression picture in the mobile terminal;
an identification module 20, configured to identify the meaning of the expression picture;
a distribution module 30, configured to distribute, according to the meaning of the expression picture, the expression picture to an expression group corresponding to that meaning.
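The three-module flow of Fig. 6 (obtain, identify, distribute) can be sketched as a simple pipeline. The group names, the pre-attached `tag` field, and the catch-all `"other"` group are assumptions for the sketch; a real identification module would run the text or image recognition described later:

```python
# Meaning-keyed expression groups; "other" catches unrecognized meanings.
GROUPS = {"happy": [], "sad": [], "other": []}

def identify_meaning(picture):
    """Identification module (20): return a meaning label for the picture.
    A placeholder that trusts a pre-attached tag instead of real recognition."""
    return picture.get("tag", "other")

def distribute(picture, groups):
    """Distribution module (30): append the picture to the group
    matching its identified meaning."""
    meaning = identify_meaning(picture)
    group = meaning if meaning in groups else "other"
    groups[group].append(picture["id"])
    return group
```

A picture tagged `happy` lands in the `happy` group; anything with an unknown meaning falls back to `other`.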
In embodiments of the present invention, an expression picture in the mobile terminal can be obtained and its meaning identified, so that after the meaning the expression picture is intended to express has been identified, the picture can be classified according to that meaning and distributed to a corresponding expression group. For example, it can be distributed to the expression group to which the meaning belongs; alternatively, a classification word can be derived from the meaning and the picture distributed to the expression group related to that classification word. Based on the above process, the expression pictures in the mobile terminal can be categorized according to the core meaning they express, realizing semantic management of expression pictures and meeting the user's core need when selecting an expression. When the user intends to express some meaning, such as a certain action, emotion, or mood, the user can directly find the corresponding meaning group among the expression groups and quickly locate the desired expression picture within that group. This spares the user the troublesome operation of searching without any order through a large number of expression pictures, saves the time spent selecting an expression picture, improves the user's communication efficiency, and optimizes the chat experience.
On the basis of the embodiment shown in Fig. 6, Fig. 7 shows the block diagram of a mobile terminal of another embodiment of the present invention, in which the identification module 20 includes:
a word recognition unit 21, configured to identify the meaning of text information if text information is detected in the expression picture.
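For the word recognition unit, one plausible sketch is shown below. The OCR step is not specified in the patent and is stubbed out here; the keyword table and all names are assumptions made for illustration:

```python
# Hypothetical keyword-to-meaning table; a real system would be far richer.
MEANING_KEYWORDS = {
    "haha": "laughing",
    "lol": "laughing",
    "bye": "farewell",
}

def extract_text(picture):
    """Stand-in for a real OCR pass over the picture's pixels;
    here we simply read a pre-supplied 'text' field."""
    return picture.get("text", "")

def meaning_from_text(picture):
    """If text is detected, map keywords in it to a meaning label."""
    text = extract_text(picture).lower()
    if not text:
        return None  # no text detected: fall through to image identification
    for keyword, meaning in MEANING_KEYWORDS.items():
        if keyword in text:
            return meaning
    return "unclassified"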
Preferably, the identification module 20 includes:
a first image identification unit 22, configured to identify the image information in the expression picture if no text information is detected in the expression picture;
and the distribution module 30 includes:
a first object obtaining unit 31, configured to obtain, if the image information is first static image information, at least one first object contained in the first static image information;
a first target obtaining unit 32, configured to obtain, according to the at least one obtained first object, a first target expression picture matching the first object from among multiple expression groups;
a first matching unit 33, configured to distribute the expression picture to the expression group to which the first target expression picture belongs.
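The first-object matching flow (units 31 to 33) might look like this in outline. The object detector is stubbed, and group membership is decided by comparing the detected objects against objects tagged on pictures already in each group; every name and data shape here is an assumption, not the patent's specification:

```python
def detect_objects(static_image):
    """Stand-in for a real object detector; reads pre-tagged objects."""
    return static_image.get("objects", [])

def find_matching_group(static_image, groups):
    """groups: {group_name: [{'id': ..., 'objects': [...]}, ...]}.
    Return the group containing a target picture that shares an object
    with the new picture, or None if nothing matches."""
    wanted = set(detect_objects(static_image))
    for name, pictures in groups.items():
        for target in pictures:
            if wanted & set(target["objects"]):
                return name  # the new picture joins this target's group
    return None
```

The new picture is then distributed to the returned group, which is exactly the role of the first matching unit.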
Preferably, the identification module 20 includes:
a second image identification unit 23, configured to identify the image information in the expression picture if no text information is detected in the expression picture;
and the distribution module 30 includes:
a static image obtaining unit 34, configured to obtain, if the image information is dynamic image information, at least second static image information of the dynamic image information at any one moment;
a second object obtaining unit 35, configured to obtain at least one second object contained in the second static image information;
a second target obtaining unit 36, configured to obtain, according to the at least one obtained second object, a second target expression picture matching the second object from among multiple expression groups;
a second matching unit 37, configured to distribute the expression picture to the expression group to which the second target expression picture belongs.
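For dynamic images (units 34 to 37), the only extra step is sampling one static frame before object matching. A frames-as-lists model keeps this sketch dependency-free; in practice the frame would come from decoding, say, a GIF. All structures below are illustrative assumptions:

```python
def frame_at(dynamic_image, moment=0):
    """Take the static image information at one moment of a dynamic image.
    A dynamic image is modelled as a list of frames (dicts of objects)."""
    frames = dynamic_image["frames"]
    return frames[min(moment, len(frames) - 1)]

def match_frame(frame, groups):
    """Match a single frame's objects against per-group object sets."""
    wanted = set(frame.get("objects", []))
    for name, object_sets in groups.items():
        if any(wanted & set(objs) for objs in object_sets):
            return name
    return None

def classify_dynamic(dynamic_image, groups, matcher):
    """Sample one frame, then reuse the static-image matcher on it."""
    frame = frame_at(dynamic_image)
    return matcher(frame, groups)

gif = {"frames": [{"objects": ["dog"]}, {"objects": ["dog", "ball"]}]}
groups = {"dogs": [["dog"]], "cats": [["cat"]]}
```

Since the claim only requires a frame "at any one moment", sampling the first frame is enough for the sketch; a robust system might sample several frames and vote.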
Preferably, the mobile terminal further includes:
a multi-group obtaining module 40, configured to obtain usage frequency information of multiple expression groups within a preset duration;
a multi-group sorting module 50, configured to sort the multiple expression groups in descending order of their usage frequency.
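The group-level ranking of modules 40 and 50 (count usage within a preset duration, then sort descending) might be sketched as below. The event-log representation and function name are assumptions for illustration:

```python
import time

def sort_groups_by_recent_use(usage_events, group_names, window_seconds, now=None):
    """usage_events: list of (timestamp, group_name) pairs.
    Count only events inside the preset duration, then sort groups
    from most-used to least-used."""
    now = time.time() if now is None else now
    counts = {name: 0 for name in group_names}
    for ts, name in usage_events:
        if now - ts <= window_seconds and name in counts:
            counts[name] += 1
    return sorted(group_names, key=lambda n: -counts[n])
```

With a 20-second window ending at t=110, two uses of "happy" at t=100 outrank one recent use of "sad", while an old "sad" event at t=10 is ignored.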
The mobile terminal provided in this embodiment of the present invention can implement each process implemented by the mobile terminal in the method embodiments of Fig. 1 to Fig. 5; to avoid repetition, details are not described here again.
Fig. 8 is a schematic diagram of the hardware structure of a mobile terminal implementing each embodiment of the present invention. The mobile terminal 100 includes, but is not limited to, components such as a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will understand that the mobile terminal structure shown in Fig. 8 does not constitute a limitation on the mobile terminal; the mobile terminal may include more or fewer components than shown, combine certain components, or use a different component arrangement. In the embodiments of the present invention, mobile terminals include, but are not limited to, mobile phones, tablet computers, laptops, palmtop computers, vehicle-mounted terminals, wearable devices, pedometers, and the like.
The processor 110 is configured to obtain an expression picture in the mobile terminal, identify the meaning of the expression picture, and, according to the meaning of the expression picture, distribute the expression picture to an expression group corresponding to that meaning.
In embodiments of the present invention, an expression picture in the mobile terminal can be obtained and its meaning identified, so that after the meaning the expression picture is intended to express has been identified, the picture can be classified according to that meaning and distributed to a corresponding expression group. For example, it can be distributed to the expression group to which the meaning belongs; alternatively, a classification word can be derived from the meaning and the picture distributed to the expression group related to that classification word. Based on the above process, the expression pictures in the mobile terminal can be categorized according to the core meaning they express, realizing semantic management of expression pictures and meeting the user's core need when selecting an expression. When the user intends to express some meaning, such as a certain action, emotion, or mood, the user can directly find the corresponding meaning group among the expression groups and quickly locate the desired expression picture within that group. This spares the user the troublesome operation of searching without any order through a large number of expression pictures, saves the time spent selecting an expression picture, improves the user's communication efficiency, and optimizes the chat experience.
It should be understood that, in this embodiment of the present invention, the radio frequency unit 101 may be used to send and receive signals in the course of sending and receiving information or during a call; specifically, after receiving downlink data from a base station, it passes the data to the processor 110 for processing, and it also sends uplink data to the base station. In general, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access through the network module 102, for example helping the user send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 103 can convert audio data received by the radio frequency unit 101 or the network module 102, or stored in the memory 109, into an audio signal and output it as sound. Moreover, the audio output unit 103 can also provide audio output related to a specific function performed by the mobile terminal 100 (for example, a call signal reception sound or a message reception sound). The audio output unit 103 includes a loudspeaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive audio or video signals. The input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processing unit 1041 processes image data of static pictures or video obtained by an image capture apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or sent via the radio frequency unit 101 or the network module 102. The microphone 1042 can receive sound and process it into audio data. In a telephone call mode, the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101, and output.
The mobile terminal 100 further includes at least one sensor 105, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor includes an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1061 and/or its backlight when the mobile terminal 100 is moved to the user's ear. As a kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), and can detect the magnitude and direction of gravity when the terminal is stationary; it can be used to identify the posture of the mobile terminal (such as landscape/portrait switching, related games, and magnetometer pose calibration) and for vibration-recognition-related functions (such as a pedometer or tapping). The sensor 105 can also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecule sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described in detail here.
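As a concrete illustration of the posture recognition mentioned above (not part of the claimed method; thresholds and axis conventions here are simplified assumptions), landscape/portrait switching can be decided from the dominant gravity axis reported by the accelerometer:

```python
def orientation(ax, ay, az):
    """Classify screen orientation from one three-axis accelerometer sample.
    Units are arbitrary; only the relative magnitudes of the axes matter."""
    if abs(az) >= max(abs(ax), abs(ay)):
        return "flat"  # gravity points mostly through the screen
    # Gravity along the long (y) axis means the phone is held upright.
    return "portrait" if abs(ay) > abs(ax) else "landscape"
```

Real devices debounce this decision over many samples before rotating the UI, but the core test is this comparison of per-axis gravity components.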
The display unit 106 is used to display information input by the user or provided to the user. The display unit 106 may include a display panel 1061, which can be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 107 can be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, collects the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1071 with a finger, a stylus, or any other suitable object or attachment). The touch panel 1071 may include a touch detection apparatus and a touch controller. The touch detection apparatus detects the user's touch position, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 can be implemented in multiple types, such as resistive, capacitive, infrared, and surface-acoustic-wave types. Besides the touch panel 1071, the user input unit 107 can also include other input devices 1072. Specifically, the other input devices 1072 can include, but are not limited to, a physical keyboard, function keys (such as a volume control key or an on/off key), a trackball, a mouse, and a joystick, which are not described in detail here.
Further, the touch panel 1071 can be overlaid on the display panel 1061. After detecting a touch operation on or near it, the touch panel 1071 transmits it to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in Fig. 8 the touch panel 1071 and the display panel 1061 are shown as two independent components implementing the input and output functions of the mobile terminal, in some embodiments the touch panel 1071 and the display panel 1061 can be integrated to implement the input and output functions of the mobile terminal; this is not specifically limited here.
The interface unit 108 is an interface through which an external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 can be used to receive input (for example, data information or electric power) from an external device and transfer the received input to one or more elements in the mobile terminal 100, or can be used to transmit data between the mobile terminal 100 and an external device.
The memory 109 can be used to store software programs and various data. The memory 109 may mainly include a program storage area and a data storage area. The program storage area can store the operating system and the application programs required for at least one function (such as a sound playback function or an image playback function); the data storage area can store data created according to the use of the mobile phone (such as audio data or a phone book). In addition, the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, for example at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 110 is the control center of the mobile terminal. It connects all parts of the entire mobile terminal using various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing the software programs and/or modules stored in the memory 109 and calling the data stored in the memory 109, thereby monitoring the mobile terminal as a whole. The processor 110 may include one or more processing units; preferably, the processor 110 can integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It will be understood that the above modem processor may alternatively not be integrated into the processor 110.
The mobile terminal 100 can also include a power supply 111 (such as a battery) that supplies power to each component. Preferably, the power supply 111 can be logically connected to the processor 110 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption through the power management system.
In addition, the mobile terminal 100 includes some functional modules that are not shown, which are not described in detail here.
Preferably, an embodiment of the present invention also provides a mobile terminal, including a processor 110, a memory 109, and a computer program stored on the memory 109 and executable on the processor 110. When executed by the processor 110, the computer program implements each process of the above expression picture management method embodiments and can achieve the same technical effects; to avoid repetition, details are not described here again.
An embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the above expression picture management method embodiments and can achieve the same technical effects; to avoid repetition, details are not described here again. The computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. In the absence of further restrictions, an element defined by the sentence "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes that element.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc) and includes several instructions for causing a terminal (which can be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above specific embodiments, which are merely illustrative rather than restrictive. Inspired by the present invention, those skilled in the art can also make many other forms without departing from the purpose of the present invention and the scope protected by the claims, all of which fall within the protection of the present invention.
Claims (11)
1. An expression picture management method applied to a mobile terminal, characterized by comprising:
obtaining an expression picture in the mobile terminal;
identifying the meaning of the expression picture;
according to the meaning of the expression picture, distributing the expression picture to an expression group corresponding to the meaning of the expression picture.
2. The method according to claim 1, characterized in that identifying the meaning of the expression picture comprises:
if text information is detected in the expression picture, identifying the meaning of the text information.
3. The method according to claim 1, characterized in that identifying the meaning of the expression picture comprises:
if no text information is detected in the expression picture, identifying the image information in the expression picture;
and distributing, according to the meaning of the expression picture, the expression picture to an expression group corresponding to the meaning of the expression picture comprises:
if the image information is first static image information, obtaining at least one first object contained in the first static image information;
according to the at least one obtained first object, obtaining, from among multiple expression groups, a first target expression picture matching the first object;
distributing the expression picture to the expression group to which the first target expression picture belongs.
4. The method according to claim 1, characterized in that identifying the meaning of the expression picture comprises:
if no text information is detected in the expression picture, identifying the image information in the expression picture;
and distributing, according to the meaning of the expression picture, the expression picture to an expression group corresponding to the meaning of the expression picture comprises:
if the image information is dynamic image information, obtaining at least second static image information of the dynamic image information at any one moment;
obtaining at least one second object contained in the second static image information;
according to the at least one obtained second object, obtaining, from among multiple expression groups, a second target expression picture matching the second object;
distributing the expression picture to the expression group to which the second target expression picture belongs.
5. The method according to claim 1, characterized in that, after distributing the expression picture to an expression group corresponding to the meaning of the expression picture according to the meaning of the expression picture, the method further comprises:
obtaining usage frequency information of multiple expression groups within a preset duration;
sorting the multiple expression groups in descending order of their usage frequency.
6. A mobile terminal, characterized by comprising:
an obtaining module, configured to obtain an expression picture in the mobile terminal;
an identification module, configured to identify the meaning of the expression picture;
a distribution module, configured to distribute, according to the meaning of the expression picture, the expression picture to an expression group corresponding to the meaning of the expression picture.
7. The mobile terminal according to claim 6, characterized in that the identification module comprises:
a word recognition unit, configured to identify the meaning of text information if text information is detected in the expression picture.
8. The mobile terminal according to claim 6, characterized in that the identification module comprises:
a first image identification unit, configured to identify the image information in the expression picture if no text information is detected in the expression picture;
and the distribution module comprises:
a first object obtaining unit, configured to obtain, if the image information is first static image information, at least one first object contained in the first static image information;
a first target obtaining unit, configured to obtain, according to the at least one obtained first object, a first target expression picture matching the first object from among multiple expression groups;
a first matching unit, configured to distribute the expression picture to the expression group to which the first target expression picture belongs.
9. The mobile terminal according to claim 6, characterized in that the identification module comprises:
a second image identification unit, configured to identify the image information in the expression picture if no text information is detected in the expression picture;
and the distribution module comprises:
a static image obtaining unit, configured to obtain, if the image information is dynamic image information, at least second static image information of the dynamic image information at any one moment;
a second object obtaining unit, configured to obtain at least one second object contained in the second static image information;
a second target obtaining unit, configured to obtain, according to the at least one obtained second object, a second target expression picture matching the second object from among multiple expression groups;
a second matching unit, configured to distribute the expression picture to the expression group to which the second target expression picture belongs.
10. The mobile terminal according to claim 6, characterized in that the mobile terminal further comprises:
a multi-group obtaining module, configured to obtain usage frequency information of multiple expression groups within a preset duration;
a multi-group sorting module, configured to sort the multiple expression groups in descending order of their usage frequency.
11. A mobile terminal, characterized by comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein, when executed by the processor, the computer program implements the steps of the expression picture management method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811253363.0A CN109510897B (en) | 2018-10-25 | 2018-10-25 | Expression picture management method and mobile terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811253363.0A CN109510897B (en) | 2018-10-25 | 2018-10-25 | Expression picture management method and mobile terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109510897A true CN109510897A (en) | 2019-03-22 |
CN109510897B CN109510897B (en) | 2021-04-27 |
Family
ID=65745986
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811253363.0A Active CN109510897B (en) | 2018-10-25 | 2018-10-25 | Expression picture management method and mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109510897B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110147791A (en) * | 2019-05-20 | 2019-08-20 | 上海联影医疗科技有限公司 | Character recognition method, device, equipment and storage medium |
CN110489578A (en) * | 2019-08-12 | 2019-11-22 | 腾讯科技(深圳)有限公司 | Image processing method, device and computer equipment |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1215867A2 (en) * | 2000-12-16 | 2002-06-19 | Samsung Electronics Co., Ltd. | Emoticon input method for mobile terminal |
CN102870081A (en) * | 2012-06-30 | 2013-01-09 | 华为技术有限公司 | Method and mobile terminal for dynamic display expressions |
US20140092101A1 (en) * | 2012-09-28 | 2014-04-03 | Samsung Electronics Co., Ltd. | Apparatus and method for producing animated emoticon |
US20150220774A1 (en) * | 2014-02-05 | 2015-08-06 | Facebook, Inc. | Ideograms for Captured Expressions |
CN104834677A (en) * | 2015-04-13 | 2015-08-12 | 苏州天趣信息科技有限公司 | Facial expression image displaying method and apparatus based on attribute category, and terminal |
CN105094363A (en) * | 2015-07-06 | 2015-11-25 | 百度在线网络技术(北京)有限公司 | Method and apparatus for processing emotion signal |
CN105930828A (en) * | 2016-04-15 | 2016-09-07 | 腾讯科技(深圳)有限公司 | Expression classification identification control method and device |
CN106127593A (en) * | 2016-06-28 | 2016-11-16 | 广东欧珀移动通信有限公司 | Emoticon processing method, device and terminal |
CN106303724A (en) * | 2016-08-15 | 2017-01-04 | 深圳Tcl数字技术有限公司 | Intelligent television adds the method and apparatus of dynamic expression automatically |
CN106445283A (en) * | 2016-09-09 | 2017-02-22 | 深圳市金立通信设备有限公司 | Emoticon acquisition method and terminal |
US20170083524A1 (en) * | 2015-09-22 | 2017-03-23 | Riffsy, Inc. | Platform and dynamic interface for expression-based retrieval of expressive media content |
CN106796583A (en) * | 2014-07-07 | 2017-05-31 | 机械地带有限公司 | System and method for recognizing and advising emoticon |
CN108701125A (en) * | 2015-12-29 | 2018-10-23 | Mz知识产权控股有限责任公司 | System and method for suggesting emoticon |
Also Published As
Publication number | Publication date |
---|---|
CN109510897B (en) | 2021-04-27 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |