CN108388557A - Message processing method, apparatus, computer device and storage medium - Google Patents
Message processing method, apparatus, computer device and storage medium
- Publication number
- CN108388557A CN108388557A CN201810119755.1A CN201810119755A CN108388557A CN 108388557 A CN108388557 A CN 108388557A CN 201810119755 A CN201810119755 A CN 201810119755A CN 108388557 A CN108388557 A CN 108388557A
- Authority
- CN
- China
- Prior art keywords
- expression
- face
- user
- face area
- fusion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/01—Social networking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
Abstract
This application relates to a message processing method, apparatus, computer device and storage medium. The method includes: obtaining message content entered in a social session; analyzing the message content to obtain a corresponding keyword; determining an expression template corresponding to the keyword; obtaining a user picture; fusing the face image in the user picture into the face image in the expression template to obtain a fused expression image; and sending the fused expression image in the social session. The solution of this application can convey information more accurately.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to a message processing method, apparatus, computer device and storage medium.
Background art
With the rapid development of Internet technology, exchanging messages sent online has become a very important mode of social interaction in daily life.
In conventional methods, users communicate by selecting and sending expression (sticker) packs prefabricated by the social platform (for example, celebrity sticker packs and cartoon-animal sticker packs). Sending platform-prefabricated sticker packs amounts to conveying the sender's intended meaning through default third-party images. This approach, confined to default third-party images made by the platform, generally cannot accurately convey the information the sender intends, so information transmission is not accurate enough.
Summary of the invention
Based on this, in view of the inaccurate information transmission of conventional methods, it is necessary to provide a message processing method, apparatus, computer device and storage medium.
A message processing method, the method including:
obtaining message content entered in a social session;
analyzing the message content to obtain a corresponding keyword;
determining an expression template corresponding to the keyword;
obtaining a user picture;
fusing the face image in the user picture into the face image in the expression template to obtain a fused expression image;
sending the fused expression image in the social session.
A message processing apparatus, the apparatus including:
an acquisition module for obtaining message content entered in a social session;
a template determining module for analyzing the message content to obtain a corresponding keyword, and determining an expression template corresponding to the keyword;
a user picture determining module for obtaining a user picture;
a fused expression image generation module for fusing the face image in the user picture into the face image in the expression template to obtain a fused expression image;
a sending module for sending the fused expression image in the social session.
A computer device, including a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the following steps:
obtaining message content entered in a social session;
analyzing the message content to obtain a corresponding keyword;
determining an expression template corresponding to the keyword;
obtaining a user picture;
fusing the face image in the user picture into the face image in the expression template to obtain a fused expression image;
sending the fused expression image in the social session.
A storage medium storing a computer program which, when executed by a processor, causes the processor to perform the following steps:
obtaining message content entered in a social session;
analyzing the message content to obtain a corresponding keyword;
determining an expression template corresponding to the keyword;
obtaining a user picture;
fusing the face image in the user picture into the face image in the expression template to obtain a fused expression image;
sending the fused expression image in the social session.
With the above message processing method, apparatus, computer device and storage medium, after the message content entered in a social session is obtained, a corresponding expression template is determined according to the keyword of the message content, and the face image in the user picture is fused into the face image in the expression template to obtain a fused expression image. The fused expression image includes facial features from the user picture specified by the user. Compared with the information conveyed by default third-party sticker packs in conventional methods, the fused expression image increases the amount of information transmitted, and the added information is specified by the user, so it can reflect the user's intention to a certain extent. Therefore, sending a fused expression image that includes facial features from the user-specified picture in the social session can convey information more accurately.
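As an illustration only, and not part of the claims, the summarized steps can be sketched end to end as follows. Every name, the keyword and template contents, and the string-based "fusion" placeholder are assumptions for demonstration, not anything specified by this application.

```python
# Hypothetical sketch of the summarized steps: analyze message content,
# determine templates for the keyword's theme, fuse the user picture in.
KEYWORD_LIBRARY = {"haha": "happy", "I want to cry": "cry"}
TEMPLATES = {
    "happy": ["template_laugh_1", "template_laugh_2"],
    "cry": ["template_cry_1"],
}

def extract_keyword(message):
    # Simplest case: the message content contains a preset keyword.
    for preset, theme in KEYWORD_LIBRARY.items():
        if preset in message:
            return theme
    return None

def process_message(message, user_picture):
    theme = extract_keyword(message)      # analyze the message content
    if theme is None:
        return []
    templates = TEMPLATES.get(theme, [])  # determine expression templates
    # Placeholder standing in for the real face-fusion step.
    return [f"fused({user_picture}, {t})" for t in templates]
```

A message matching no preset keyword simply yields no fused expression images, leaving the session's normal text flow untouched.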
Brief description of the drawings
Fig. 1 is an application scenario diagram of the message processing method in one embodiment;
Fig. 2 is a schematic flowchart of the message processing method in one embodiment;
Fig. 3 is a schematic interface diagram of the message processing method in one embodiment;
Fig. 4 and Fig. 5 are schematic interface diagrams of obtaining a fused expression image in one embodiment;
Fig. 6 is a schematic diagram of the principle of generating a fused expression image in one embodiment;
Fig. 7 is a schematic flowchart of the message processing method in another embodiment;
Fig. 8 is a block diagram of the message processing apparatus in one embodiment;
Fig. 9 is a block diagram of the message processing apparatus in another embodiment;
Fig. 10 is a block diagram of the fused expression image generation module in one embodiment;
Fig. 11 is a schematic diagram of the internal structure of a computer device in one embodiment.
Detailed description
To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to illustrate the present invention, not to limit it.
Fig. 1 is an application scenario diagram of the message processing method in one embodiment. Referring to Fig. 1, the scenario includes a first terminal 110, a server 120 and a second terminal 130. The first terminal 110 and the second terminal 130 each communicate with the server 120 over a network. A social session can be established between the first terminal 110 and the second terminal 130 through the server 120 to send and receive messages; for example, the first terminal 110 can send a message to the server 120 through the social session, and the server 120 forwards the message to the second terminal 130.
The first terminal 110 and the second terminal 130 can be smart televisions, desktop computers or mobile terminals, and the mobile terminals can include at least one of mobile phones, tablet computers, notebook computers, personal digital assistants and wearable devices. The server 120 can be implemented as an independent server or as a server cluster composed of multiple physical servers.
It can be understood that, in other embodiments, the first terminal 110 and the second terminal 130 may send and receive messages directly in a point-to-point manner rather than through the server 120.
The first terminal 110 can obtain the message content entered in a social session; analyze the message content to obtain a corresponding keyword; determine an expression template corresponding to the keyword; obtain a user picture; and fuse the face image in the user picture into the face image in the expression template to obtain a fused expression image. The first terminal 110 can send the fused expression image in the social session. It can be understood that the first terminal 110 can send the fused expression image to the server 120 in the social session, and the server 120 forwards the fused expression image to the second terminal 130. The first terminal 110 can also directly send the fused expression image to the second terminal 130 in the social session in a point-to-point manner.
It can be understood that, to realize two-way communication, the functions of the first terminal 110 and the second terminal 130 can be interchanged; that is, the second terminal 130 can also perform the above steps to send a fused expression image to the first terminal 110.
Fig. 2 is a schematic flowchart of the message processing method in one embodiment. This embodiment is mainly illustrated by applying the message processing method to a computer device, which can be the first terminal 110 or the second terminal 130 in Fig. 1. Referring to Fig. 2, the method specifically includes the following steps:
S202: obtain the message content entered in a social session.
Here, a social session is a communication and interaction process carried out between at least one user object and other user objects. A user object represents a user.
In one embodiment, a social application is installed on the computer device. After user objects log in to the social application with their social accounts, a social session is established through the social application. A social application is an application program for realizing social communication. In one embodiment, the social application can include at least one of an instant messaging application, a social networking application and a self-media platform application. Instant messaging applications can include QQ (Tencent QQ, an Internet-based instant messaging application developed by Tencent) and WeChat (an application released by Tencent that provides instant messaging services for intelligent terminals).
It can be understood that, in other embodiments, a user object can log in with a social account through the computer device to access a web version of the social application and establish a social session.
Specifically, the computer device can display the social session interface corresponding to the social session, and the user object can enter message content based on the social session interface. It can be understood that the message content entered by the user based on the social session interface corresponding to the social session is the message content entered in the social session. The computer device obtains the message content entered in the social session.
S204: analyze the message content to obtain a corresponding keyword.
Here, a keyword of the message content is a word that plays a substantial role in expressing the information conveyed by the message content.
It can be understood that the keyword corresponding to the message content can be a word that the message content itself includes, or a word, determined by semantic analysis of the message content, that is not included in the message content but can characterize the theme it expresses.
Specifically, the computer device can analyze the message content and extract a keyword from it. The computer device can also perform semantic analysis on the message content and, according to the semantic analysis result, determine a word characterizing the theme expressed by the message content as the keyword corresponding to the message content.
For example, suppose the entered message content is "haha". The keyword can then be the message content "haha" itself, or it can be the word "happy", determined by semantic analysis of "haha" as characterizing the theme expressed by the message content.
In one embodiment, a keyword library is preset in the computer device. The keyword library includes multiple preset keywords. The computer device can search the keyword library for a keyword matching the entered message content and use the found keyword as the keyword corresponding to the message content. It can be understood that a keyword matching the message content can be an absolute match (i.e., completely consistent with the message content) or a fuzzy match (i.e., not identical to the message content, but with a matching degree greater than or equal to a preset matching-degree threshold). For example, suppose the entered message content is "haha" and the preset keyword "haha" is found in the keyword library; then the keyword corresponding to the message content is "haha". Suppose the entered message content is "hahaha", and its matching degree with the preset keyword "haha" in the keyword library exceeds the preset threshold; then the keyword corresponding to the message content is "haha".
In another embodiment, a keyword library is preset in the computer device. The computer device can segment the entered message content into word segments, match each word segment obtained by segmentation against each keyword in the keyword library, and use the matched keyword as the keyword corresponding to the message content. For example, if the entered message content is "I want to cry", segmentation yields the word segments "I", "want" and "cry"; since the keyword library includes the keyword "cry", the keyword corresponding to the message content "I want to cry" is "cry".
In yet another embodiment, a keyword library is preset in the computer device, and each keyword in the keyword library is provided with a corresponding expression theme. An expression theme characterizes the gist of the information expressed by the corresponding keyword. The computer device can perform semantic analysis on the message content, determine the expression theme corresponding to the message content, and, according to the determined expression theme, look up the keyword corresponding to that expression theme in the keyword library. For example, semantic analysis of "I'm so excited I can't stop grinning" determines that the keyword characterizing the corresponding expression theme is "laugh".
It can be understood that, in other embodiments, the computer device can send the obtained message content to a server and receive the corresponding keyword obtained by the server's analysis of the message content.
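The exact-then-fuzzy matching described above can be sketched with the standard library; `difflib.SequenceMatcher` here is only a stand-in for whatever matching-degree measure an implementation actually uses, and the threshold value is an assumption.

```python
import difflib

KEYWORD_LIBRARY = ["haha", "cry", "laugh"]
MATCH_THRESHOLD = 0.5  # assumed preset matching-degree threshold

def match_keyword(message):
    # Absolute match first: the message content equals a preset keyword.
    if message in KEYWORD_LIBRARY:
        return message
    # Otherwise fuzzy match: choose the preset keyword with the highest
    # similarity, accepted only if it reaches the preset threshold.
    best, best_score = None, 0.0
    for kw in KEYWORD_LIBRARY:
        score = difflib.SequenceMatcher(None, message, kw).ratio()
        if score > best_score:
            best, best_score = kw, score
    return best if best_score >= MATCH_THRESHOLD else None
```

For instance, "hahaha" is not identical to any preset keyword, but its similarity to "haha" clears the threshold, so "haha" is returned; an unrelated message returns nothing.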
S206: determine an expression template corresponding to the keyword.
Here, an expression template is a default expression image that includes a face image.
In one embodiment, correspondences between keywords and expression templates are preset in the computer device, where one keyword can correspond to at least one expression template. According to the correspondences, the computer device can look up the expression template corresponding to the obtained keyword.
In other embodiments, the computer device can receive an expression template corresponding to the keyword fed back by the server.
S208: obtain a user picture.
Here, a user picture is a picture, specified by the user, that includes a face image. It can be understood that the face image included in the user picture can be the face image of any object and is not limited to the face image of the user who entered the message content.
In one embodiment, the computer device can obtain a preset user picture. Specifically, the user can upload a customized user picture in advance through a settings interface of the computer device, and the computer device can store the user picture preset by the user. When expression fusion processing is needed, the computer device can obtain the preset user picture.
In one embodiment, the computer device can receive and respond to an operation for obtaining a picture, so as to obtain the user picture. The operation for obtaining a picture can include an image acquisition operation or a picture upload operation.
In one embodiment, the computer device can receive a picture upload operation and, according to the picture upload operation, select a local picture on the computer device as the user picture.
In one embodiment, step S208 includes: displaying an image acquisition interface; receiving an image acquisition operation instruction acting on the image acquisition interface; and performing image acquisition in response to the image acquisition operation instruction to obtain the user picture.
Here, the image acquisition interface is an interface for acquiring images.
In one embodiment, in response to the image acquisition operation instruction, the computer device can call a local camera to perform image acquisition and obtain the user picture.
In the above embodiment, during the social session, the image acquisition interface is displayed, image acquisition is performed to obtain the user picture, and the fused expression image generated from the user picture is sent in the social session. This combines the social session with image acquisition and expression fusion, without each step being performed as an isolated separate operation, thereby improving message processing efficiency.
S210: fuse the face image in the user picture into the face image in the expression template to obtain a fused expression image.
Here, fusion refers to the process in which different objects interpenetrate and influence each other so as to merge into one. A fused expression image is a picture obtained by fusing the face image in the user picture with the face image in the expression template. A fused expression image can be a static image or a dynamic image.
It should be noted that the computer device can fuse the face image in the user picture into the face images of all or some of the expression templates determined in step S206, obtaining corresponding fused expression images.
Specifically, with the expression template as the base image, the computer device fuses the face image in the user picture into the face image in the expression template to obtain the fused expression image. It can be understood that the computer device can determine the face image in the user picture and the face image in the expression template, and fuse the determined face image in the user picture into the determined face image in the expression template. The computer device can also crop out the face image in the user picture and the face image in the expression template respectively, fuse the cropped face image from the user picture into the cropped face image from the expression template, and splice the fused face image back into the expression template from which it was cropped, obtaining the fused expression image.
It should be noted that the computer device can fuse only the face image in the user picture into the face image in the expression template to obtain the fused expression image. The computer device can also additionally fuse the image content of at least one other body part in the user picture into the image at the corresponding position of the expression template to obtain the fused expression image.
It can be understood that when the computer device fuses only the face image in the user picture into the face image in the expression template, in the resulting fused expression image, apart from the face image (which is a blend of the face images in the user picture and the expression template), the remaining picture content is the original content of the expression template.
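The crop-fuse-splice path described above can be sketched with nested lists standing in for images. The face-region coordinates are assumed to be already known (e.g., from some face detector, which this text does not specify), and the pixel-level `fuse` step is passed in as a placeholder.

```python
def splice_fused_face(template, user_face, face_box, fuse):
    # template: 2D grid of pixels (the base image);
    # face_box: (top, left, height, width) of the template's face region,
    #           assumed known in advance;
    # user_face: the user's cropped face, already resized to height x width;
    # fuse: pixel-level fusion function applied inside the face region.
    top, left, h, w = face_box
    out = [row[:] for row in template]  # copy so the template stays intact
    for i in range(h):
        for j in range(w):
            out[top + i][left + j] = fuse(template[top + i][left + j],
                                          user_face[i][j])
    return out
```

Everything outside `face_box` is carried over unchanged from the template, matching the note above that only the face region is a blend while the rest of the picture keeps the template's original content.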
It should be noted that, to avoid wasting resources, when a historical fused expression image generated from the expression template corresponding to the keyword and the obtained user picture already exists, that historical fused expression image can be obtained directly, without performing step S210 again for redundant fusion processing.
In one embodiment, there can be multiple expression templates corresponding to the keyword. When some of them already have historical fused expression images generated from the user picture, the computer device can directly obtain those generated historical fused expression images. For expression templates without a corresponding historical fused expression image, the computer device can perform step S210 to generate the corresponding fused expression image, or it can display options for the expression templates without a corresponding historical fused expression image and, upon receiving the user's selection of a displayed expression template option, fuse the face image in the user picture into the face image of the expression template targeted by the selection operation.
In one embodiment, the computer device can perform a linear or nonlinear fusion calculation according to the pixel value of each pixel of the face image in the expression template and the pixel value of the corresponding pixel of the face image in the user picture, and perform image processing according to the pixel values obtained by the fusion calculation to obtain the fused expression image. In one embodiment, the computer device can compute a weighted average of the pixel value of each pixel of the face image in the expression template and the pixel value of the corresponding pixel of the face image in the user picture, and perform image processing according to the weighted-average pixel values to obtain the fused expression image.
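A minimal sketch of the weighted-average fusion calculation, assuming equal-sized grayscale images represented as nested lists; the weight `alpha` is an assumed parameter, not a value given by this text.

```python
def weighted_blend(template_pixels, user_pixels, alpha=0.5):
    # Pixel-wise weighted average of two equal-sized face images:
    # result = alpha * template + (1 - alpha) * user, per pixel.
    # alpha is the template's assumed weight in the blend.
    return [
        [round(alpha * t + (1 - alpha) * u) for t, u in zip(t_row, u_row)]
        for t_row, u_row in zip(template_pixels, user_pixels)
    ]
```

A nonlinear fusion calculation would simply replace the per-pixel expression with a nonlinear function of the two pixel values.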
S212: send the fused expression image in the social session.
In one embodiment, the computer device can directly send the obtained fused expression image in the social session.
In one embodiment, the computer device can display options for the obtained fused expression images and, according to a detected selection operation on a displayed fused expression image option, send the fused expression image corresponding to the selection operation in the social session. It can be understood that, upon receiving a trigger operation for displaying more fused expression images, the computer device can display, according to the corresponding trigger operation, the fused expression images that have not yet been displayed.
In one embodiment, the computer device can obtain the heat value of the expression template corresponding to each obtained fused expression image, and sort the options of the fused expression images in descending order of the heat values of their expression templates. It can be understood that the computer device can display the options of the fused expression images whose corresponding expression templates' heat values rank within a preset number of top positions.
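The heat-value ordering above can be sketched as a simple sort; the function and parameter names, and treating missing heat values as zero, are assumptions for illustration.

```python
def rank_fusion_options(fused_images, heat_values, top_n=3):
    # fused_images: {template_id: fused expression image};
    # heat_values:  {template_id: heat value of that expression template}.
    # Order options by template heat value, highest first, and keep only
    # the first top_n for display (the assumed "preset number" of options).
    ranked = sorted(fused_images,
                    key=lambda t: heat_values.get(t, 0),
                    reverse=True)
    return ranked[:top_n]
```

Options beyond `top_n` would then be revealed only after the trigger operation for displaying more fused expression images.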
It can be understood that the computer device can send only the fused expression image in the social session, or it can send the obtained message content in addition to the fused expression image. In one embodiment, the computer device can send the fused expression image and the obtained message content in association; associated sending can include sending the fused expression image and the obtained message content in the form of a single message.
Fig. 3 is a schematic interface diagram of the message processing method in one embodiment. Referring to Fig. 3, user "B" and user "A" establish a social session through the computer devices they respectively use. The message content entered by user "B" is "haha"; according to the message content "haha", the computer device can perform the above steps to generate fused expression images 1 to 3 and display options for the generated fused expression images. After user "B" selects fused expression image 1, the computer device selects fused expression image 1 and sends it in the social session carried out with the terminal used by user "A".
In one embodiment, the computer device sends the fused expression image to the server through the social session, so that the server forwards the fused expression image to the device that has established the social session with the computer device. For example, suppose the computer device is the first terminal and the second terminal has established a social session with the first terminal; the first terminal can then send the fused expression image to the server through the social session, and the server forwards the fused expression image to the second terminal.
In one embodiment, the computer device can directly send the fused expression image in the social session, in a point-to-point manner, to the device that has established the social session with the computer device. Similarly, suppose the computer device is the first terminal and the second terminal has established a social session with the first terminal; the first terminal can then directly send the fused expression image to the second terminal in the social session in a point-to-point manner.
With the above message processing method, after the message content entered in the social session is obtained, the corresponding expression template is determined according to the keyword corresponding to the message content, and the face image in the user picture is fused into the face image in the expression template to obtain the fused expression image. The fused expression image includes facial features from the user picture specified by the user. Compared with the information conveyed by default third-party sticker packs in conventional methods, the fused expression image increases the amount of information transmitted, and the added information is specified by the user and can reflect the user's intention to a certain extent. Therefore, sending a fused expression image that includes facial features from the user-specified user picture in the social session can convey information more accurately.
In one embodiment, there is at least one expression template corresponding to the keyword. The method further includes: displaying options for the at least one expression template corresponding to the keyword; and, when a first selection operation on a displayed expression template option is detected, choosing an expression template according to the first selection operation. Step S210 then includes: fusing the face image in the user image into the face image of the chosen expression template to obtain the fusion expression image.
It can be understood that an option of an expression template corresponds to that expression template and is used to receive a selection operation on the corresponding template. In one embodiment, the option of an expression template can be a thumbnail of the template.
Specifically, the computer device searches a preset correspondence between keywords and expression templates to determine the at least one expression template corresponding to the keyword, and displays the options of those templates. The user can perform a first selection operation on a displayed expression template option to pick the desired template. When the computer device detects a first selection operation on a displayed option, it chooses the corresponding expression template according to that operation. The computer device can then fuse the face image in the user image into the face image of the chosen expression template to obtain the fusion expression image.
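The preset keyword-to-template correspondence described above can be sketched as a simple lookup table. The keywords and template names below are invented for illustration; they are not taken from the patent.

```python
# Hypothetical sketch of the preset correspondence between keywords and
# expression templates; the table contents are illustrative assumptions.
KEYWORD_TEMPLATES = {
    "heartily": ["laugh_template_1", "laugh_template_2"],
    "sad": ["cry_template_1"],
}

def templates_for_keyword(keyword):
    """Look up the at least one expression template corresponding to a keyword."""
    return KEYWORD_TEMPLATES.get(keyword, [])

print(templates_for_keyword("heartily"))  # ['laugh_template_1', 'laugh_template_2']
```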
It can be understood that the computer device can display, on its display interface, the options of the at least one expression template corresponding to the keyword determined in step S206. It should be noted that the computer device may display all of the determined expression templates, or display only part of them to save display space.
In one embodiment, the computer device can obtain a heat value for each determined expression template and display the options of the templates whose heat values rank within a preset number of top positions. It can be understood that, while displaying these template options, the computer device can, upon receiving a trigger operation for showing more expression templates, further display the determined templates that have not yet been shown.
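The heat-value ranking just described can be sketched as follows. The record structure and field names are illustrative assumptions, not part of the patent.

```python
# Hypothetical sketch: rank candidate expression templates by their heat
# value (popularity score) and return only the top preset number of options.
def top_template_options(templates, preset_count=4):
    """Return option names for the `preset_count` templates with highest heat."""
    ranked = sorted(templates, key=lambda t: t["heat"], reverse=True)
    return [t["name"] for t in ranked[:preset_count]]

templates = [
    {"name": "laugh_a", "heat": 120},
    {"name": "laugh_b", "heat": 300},
    {"name": "laugh_c", "heat": 45},
]
print(top_template_options(templates, preset_count=2))  # ['laugh_b', 'laugh_a']
```

The templates beyond the preset count would then be shown only after a "show more" trigger operation.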
In the above embodiment, the options of the expression templates corresponding to the keyword are displayed; when a first selection operation on a displayed option is detected, an expression template is chosen according to that operation; and the face image in the user image is fused into the face image of the chosen template to obtain the fusion expression image. Expression fusion is performed only for the template the user specifies, which avoids wasting resources.
In one embodiment, the method further includes: querying historical fusion expression images generated from the expression template corresponding to the keyword; displaying options for the queried historical fusion expression images; when a second selection operation on a displayed historical fusion expression image option is detected, choosing a fusion expression image according to the second selection operation; and sending the chosen fusion expression image in the social session.
Here, an option of a fusion expression image corresponds to that fusion expression image and is used to receive a selection operation on it. In one embodiment, the option can be a thumbnail of the fusion expression image. A historical fusion expression image is a fusion expression image that has already been generated.
Specifically, in addition to displaying the options of the at least one expression template corresponding to the keyword, the computer device can also query the historical fusion expression images generated from that template and display their options. When a second selection operation on a displayed historical option is detected, the computer device chooses the historical fusion expression image according to the second selection operation and sends the chosen historical fusion expression image in the social session.
In one embodiment, the computer device can query local storage for the historical fusion expression images generated from the expression template corresponding to the keyword, or receive from the server the queried historical fusion expression images generated from that template.
In the above embodiment, the corresponding historical fusion expression images are displayed. Since a historical fusion expression image has already been generated, it may meet the user's needs to a certain extent, so displaying the corresponding historical images contributes to the accurate transmission of the message.
In one embodiment, step S208 includes: accessing a webpage for generating fusion expression images; detecting a trigger operation on the picture upload interface of the webpage; and obtaining the user image according to the trigger operation. Step S210 includes: uploading the user image and sending the identifier of the expression template to the server through the webpage, and receiving the fusion expression image fed back by the server; the fed-back fusion expression image is obtained by fusing the face image in the user image into the face image of the expression template corresponding to the identifier.
In one embodiment, the webpage for generating fusion expression images can be an HTML5 (HyperText Markup Language, version 5) webpage. It can be understood that the webpage can also be in other formats; this is not limited here. The webpage is provided with a picture upload interface, which is an interface for uploading pictures.
In one embodiment, obtaining the user image according to the trigger operation includes selecting a local picture as the user image according to the trigger operation. In another embodiment, obtaining the user image according to the trigger operation includes performing image acquisition according to the trigger operation to obtain the user image.
Specifically, the computer device can access the webpage for generating fusion expression images by calling the browser provided by the operating system, or through a browser integrated into a locally installed social networking application. The computer device can display the webpage for generating fusion expression images. The user can perform a trigger operation on the picture upload interface in the webpage; when the computer device detects the trigger operation on the picture upload interface, it selects a local picture as the user image or calls the camera to acquire the user image, according to the trigger operation. The computer device can upload the selected user image and send the identifier of the expression template to the server through the webpage. The server determines the expression template corresponding to the received identifier, fuses the face image in the user image into the face image of the determined template to obtain the fusion expression image, and feeds the obtained fusion expression image back to the computer device.
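The upload step performed through the webpage can be sketched as a single multipart POST request. The server URL, form field names, and identifier format below are illustrative assumptions, not part of the patent, and the standard library is used instead of any particular client framework.

```python
import io
import urllib.request
import uuid

def request_fusion(image_bytes, template_id, server_url):
    """Upload the user image and the expression template's identifier;
    return the fused expression image bytes fed back by the server."""
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    # Form field carrying the expression template's identifier (name assumed).
    body.write(f"--{boundary}\r\nContent-Disposition: form-data; "
               f'name="template_id"\r\n\r\n{template_id}\r\n'.encode())
    # File field carrying the user image (field name assumed).
    body.write(f"--{boundary}\r\nContent-Disposition: form-data; "
               f'name="user_image"; filename="user.png"\r\n'
               f"Content-Type: application/octet-stream\r\n\r\n".encode())
    body.write(image_bytes)
    body.write(f"\r\n--{boundary}--\r\n".encode())
    req = urllib.request.Request(
        server_url,
        data=body.getvalue(),
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.read()  # bytes of the fused expression image
```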
Fig. 4 and Fig. 5 are interface diagrams of obtaining a fusion expression image in one embodiment. Referring to Fig. 4, the user performs a trigger operation on the picture upload interface "Generate now" in the displayed webpage for generating fusion expression images. The computer device then takes a photo or selects a local picture to obtain user image 402 according to the trigger operation, and uploads user image 402 to the server. The server performs face fusion according to user image 402 and expression template 404 to obtain the fusion expression image, and feeds the obtained fusion expression image back to the computer device. In Fig. 5, 502 is the fusion expression image fed back by the server and received by the computer device.
In the above embodiment, the user image is uploaded to the server by accessing a webpage, and the fusion expression image produced by the server from the user image and the expression template is obtained, without modifying the social networking application itself to support user image upload. Compared with improving the social networking application itself, accessing a webpage reduces cost and improves efficiency.
In one embodiment, the expression template includes multiple consecutive frames of expression images. Step S210 includes: fusing the face image in the user image into the face image of each frame separately, obtaining multiple consecutive fusion expression frames; and synthesizing the consecutive fusion expression frames into a fusion expression animation.
Specifically, the computer device can fuse the face image in the user image into the face image of each frame separately, obtaining multiple consecutive fusion expression frames, and synthesize them into a fusion expression animation according to the format specification of the animated image. In one embodiment, the fusion expression animation can be an animated image in GIF (Graphics Interchange Format) format.
In one embodiment, the computer device can compress each fusion expression frame using an image processing library, reducing the color depth of each frame to meet the color-depth standard of the animated image format, and then synthesize the reduced-color-depth frames into the fusion expression animation. In one embodiment, the computer device can synthesize the reduced-color-depth frames into the fusion expression animation through an image synthesis program.
Here, color depth refers, in the field of computer graphics, to the number of bits used to store the color of one pixel in a bitmap or video frame buffer. The animated-image color-depth standard is the color depth the animated format supports. In one embodiment, the image processing library can be OpenCV (Open Source Computer Vision Library), a cross-platform computer vision library distributed under the BSD (open source) license. In one embodiment, the image synthesis program can be ImageMagick (a free image processing program).
In one embodiment, the computer device can reduce the color depth of each fusion expression frame to 8 bits (i.e. 256 colors).
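The two steps just described, reducing each frame to an 8-bit (256-color) palette and synthesizing the frames into an animated image, can be sketched with the Pillow library rather than the OpenCV/ImageMagick pipeline named above; the frame duration is an illustrative choice.

```python
# Sketch: quantize each fusion expression frame to a 256-color palette
# (8-bit color depth, the color depth GIF supports) and assemble the
# consecutive frames into an animated fusion expression GIF.
def frames_to_gif(frames, out_path, ms_per_frame=80):
    # Reduce every frame's color depth to 8 bits (256 colors).
    pal = [f.convert("RGB").quantize(colors=256) for f in frames]
    # Synthesize the consecutive frames into one animated image.
    pal[0].save(out_path, save_all=True, append_images=pal[1:],
                duration=ms_per_frame, loop=0)
```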
In the above embodiment, the consecutive fusion expression frames are synthesized into a fusion expression animation. Compared with a static fusion expression image, transmitting information in animated form conveys it more accurately.
In one embodiment, step S210 includes: determining a first face area in the user image; determining a second face area in each frame of the expression images; adjusting, for each frame, the distribution positions of the facial feature points in the first face area of the user image according to the distribution positions of the facial feature points in that frame's second face area; and fusing, into the face image of each frame's second face area, the face image of the correspondingly adjusted first face area, to obtain the fusion expression image.
Here, a face area is a region that includes a face image. It can be understood that the face area may also include regions beyond the face image, such as hair or a hat. Facial feature points are points that identify the positions of facial features; for example, the feature parts of a face may be identified by 83 facial feature points. Facial feature points include points marking feature parts such as the eyes, nose, and mouth. The distribution position of a facial feature point is the position at which the point lies in the plane of the face area. It can be understood that the positions at which the facial feature points are distributed determine the region covered by the face image within the face area.
Specifically, the computer device can determine the first face area in the user image and determine the second face area in each frame of the expression images. For each frame, the computer device can adjust the distribution positions of the facial feature points in the first face area of the user image according to the distribution positions of the facial feature points in that frame's second face area, so that, after adjustment, the distribution position of each facial feature point in the first face area of the user image is consistent with that of the same feature point in the corresponding frame's second face area. In one embodiment, the sizes of the first face area and the second face area match, which ensures that the fusion expression image obtained after aligning the distribution positions of the same facial feature points in the two areas is not excessively distorted and retains the facial features of the user image.
In one embodiment, the distribution position of a facial feature point can be the position coordinates of the point in a two-dimensional coordinate system established with the face area as the two-dimensional coordinate plane.
For example, suppose the first face area in the user image has feature points 1, 2, and 3 representing the eyes, nose, and mouth respectively, and the second face area of every frame likewise has feature points 1, 2, and 3 representing the eyes, nose, and mouth, but the distribution position of the same feature point 1 may differ between second face areas. Assume the distribution position of facial feature point 1 is (1, 2) in the second face area of expression frame A, (2, 2) in the second face area of expression frame B, and (1.5, 2) in the first face area of the user image. The computer device can then adjust the distribution position (1.5, 2) of feature point 1 in the first face area according to the position (1, 2) in frame A's second face area and the position (2, 2) in frame B's second face area respectively: the position (1.5, 2) is adjusted to (1, 2) for frame A and to (2, 2) for frame B.
It can be understood that the computer device can also adjust the distribution positions of the facial feature points in the first face area of the user image together with those in the second face area of each frame, so that after adjustment the distribution positions of the same facial feature points in the corresponding first and second face areas are consistent.
In one embodiment, determining the first face area in the user image includes cropping the first face area out of the user image, and determining the second face area in each frame includes cropping the second face area out of each frame. In this embodiment, fusing the face image of the correspondingly adjusted first face area into the face image of each frame's second face area to obtain the fusion expression image includes: fusing, into the face image of each second face area, the face image of the correspondingly adjusted first face area; and splicing each second face area containing the fused face image back onto the corresponding expression frame to obtain the fusion expression image. It can be understood that the corresponding expression frame is the frame from which the second face area was cropped.
In the above embodiment, before the face image of the user image is fused into the expression frame, the distribution positions of the facial feature points in the first face area of the user image are adjusted according to the distribution positions of the facial feature points in the frame's second face area. This is equivalent to deforming the face image in the user image so that the distribution positions of the same facial feature points in the user image's face image match those in the expression frame. The fused face image thus avoids distortion and deformity and better retains the facial features of the user image, which improves the accuracy of the fusion expression image and hence the accuracy of information transmission.
In one embodiment, adjusting the positions of the facial feature points in the first face area of the user image according to the positions of the facial feature points in each frame's second face area includes: determining, for each second face area, the corresponding second distribution positions of the facial feature points; determining, in the first face area, the corresponding first distribution positions of the facial feature points; determining the target distribution position of each facial feature point according to a preset deformation parameter, the second distribution position in the corresponding second face area, and the first distribution position in the first face area; and adjusting, according to each target distribution position, the second distribution position of the facial feature point in the corresponding second face area and its corresponding first distribution position in the first face area.
Here, the target distribution position is the position to which a facial feature point in the first face area and the second face area needs to be moved. In one embodiment, the target distribution position lies between the corresponding first distribution position and the corresponding second distribution position. It can be seen that adjusting the distribution positions of the facial feature points in the first and second face areas is equivalent to deforming the first and second face areas. The deformation parameter is a preset parameter for performing the deformation; it specifies the degree to which the first face area is deformed toward the second face area, and the degree to which the second face area is deformed toward the first face area.
Specifically, the computer device can determine the second distribution positions of the facial feature points in each second face area, and determine the first distribution positions of the facial feature points in the first face area. According to the preset deformation parameter, each facial feature point's second distribution position in the corresponding second face area, and its first distribution position in the first face area, the computer device can determine each facial feature point's target distribution position. The computer device can then adjust, according to each target distribution position, the second distribution position of the feature point in the corresponding second face area and its corresponding first distribution position in the first face area.
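One way to realize a target position "between" the two distribution positions, governed by a preset deformation parameter, is a linear blend. The linear form is an assumption, though it is consistent with the worked numerical example given further below (a deformation parameter of 0.3 reproduces its coordinates).

```python
# Hedged sketch: with deformation parameter alpha in [0, 1], take the target
# distribution position as a linear blend of the first distribution position
# (user image) and the second distribution position (expression frame).
# alpha = 0 keeps the user image's geometry; alpha = 1 adopts the frame's.
def target_position(p_first, p_second, alpha=0.3):
    return tuple(round((1 - alpha) * a + alpha * b, 6)
                 for a, b in zip(p_first, p_second))

print(target_position((3, 2), (4, 2)))  # → (3.3, 2.0)
```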
In one embodiment, determining the target distribution position of each facial feature point according to the preset deformation parameter, the second distribution positions in each corresponding second face area, and the first distribution positions in the first face area includes: triangulating the first face area according to its facial feature points, obtaining multiple adjacent first triangles whose vertices are facial feature points of the first face area; for each second face area, triangulating the second face area according to its facial feature points, obtaining multiple adjacent second triangles whose vertices are facial feature points of the second face area; and, for each second triangle, performing triangle affine deformation according to the preset deformation parameter, the second distribution positions of the facial feature points serving as that triangle's vertices, and the first distribution positions of the same feature points serving as vertices of the corresponding first triangle, thereby mapping out the target distribution position corresponding to each facial feature point.
Here, triangle affine deformation is the process of mapping from a source triangle's vertices to obtain a target triangle's vertices. In this embodiment, the triangle affine deformation amounts to taking the second triangle and the corresponding first triangle as source triangles and mapping out the target triangle.
It can be understood that the computer device can determine the adjusted face image in the first face area from the facial feature points after they are adjusted to their respective target distribution positions, and determine the adjusted face image in the second face area from the facial feature points in the second face area after they are adjusted to the corresponding target distribution positions.
In one embodiment, the computer device can connect the facial feature points in the first face area, after their adjustment to the respective target distribution positions, according to the first triangles they formed before adjustment, obtaining multiple adjacent first target triangles, and determine the adjusted face image in the first face area from the region covered by the obtained first target triangles. Likewise, it can connect the facial feature points in the second face area, after their adjustment to the respective target distribution positions, according to the second triangles they formed before adjustment, obtaining multiple adjacent second target triangles, and determine the adjusted face image in the second face area from the region covered by the obtained second target triangles.
In one embodiment, the distribution position of a facial feature point can be the position coordinates of the point in a two-dimensional coordinate system established with the face area as the two-dimensional coordinate plane.
For example, suppose the position of facial feature point 1 in the first face area is (3, 2), its position in second face area a corresponding to expression frame A is (4, 2), and its position in second face area b corresponding to expression frame B is (5, 3). Combining the preset deformation parameter with the position (3, 2) in the first face area and the position (4, 2) in second face area a, the target distribution position of feature point 1 is determined to be (3.3, 2); the computer device then adjusts both the position (3, 2) in the first face area and the position (4, 2) in second face area a to the target distribution position (3.3, 2). From the position (3, 2) in the first face area and the position (5, 3) in second face area b, the target distribution position of feature point 1 is determined to be (3.6, 2.3); the computer device then adjusts both the position (3, 2) in the first face area and the position (5, 3) in second face area b to the target distribution position (3.6, 2.3).
In this embodiment, fusing the face image of the correspondingly adjusted first face area into the face image of each frame's second face area to obtain the fusion expression image includes: fusing, into the face image of each frame's adjusted second face area, the face image of the correspondingly adjusted first face area of the user image, to obtain the fusion expression image.
Here, the face image of the correspondingly adjusted first face area refers to the face image of the first face area of the user image after adjustment according to the same target distribution positions.
It can be understood that, when the first face area is cropped from the user image and the second face area from each frame, the computer device can fuse, into the face image of each adjusted second face area, the face image of the correspondingly adjusted first face area, and splice each second face area containing the fused face image back onto the corresponding expression frame to obtain the fusion expression image.
In one embodiment, in the adjusted second face area, the computer device can compute a weighted average of the pixel value of each pixel of the adjusted first face area's face image and the pixel value of the corresponding pixel of the second face area's face image, and perform image processing according to the weighted-average pixel values to obtain the fusion expression image.
In the above embodiment, the target distribution position of each facial feature point is determined according to the preset deformation parameter, the second distribution position in the corresponding second face area, and the first distribution position in the first face area; then, according to each target distribution position, the second distribution position of the feature point in the corresponding second face area and its first distribution position in the first face area are adjusted. This adjustment is equivalent to moving the facial features of the first face area and those of the second face area toward each other, so that the generated fusion expression image retains partial features of both the user image and the expression frame while having its own unique facial features. It is equivalent to creating a new character whose facial features are related to both the user image and the expression frame, which further increases the amount of information transmitted while ensuring the accuracy of the transmission.
In one embodiment, the method further includes: obtaining a mask image corresponding to each expression frame; and adjusting the distribution positions of the facial feature points in each mask image to the corresponding target distribution positions, where the corresponding target distribution position is the target distribution position determined for the facial feature point from its second distribution position in the second face area of the expression frame corresponding to the mask image.
In this embodiment, fusing the face image of the correspondingly adjusted first face area of the user image into the face image of each frame's adjusted second face area to obtain the fusion expression image includes: determining, through the specified fusion region in the correspondingly adjusted mask image, the second to-be-fused face image in each frame's adjusted second face area, and determining the first to-be-fused face image in the correspondingly adjusted first face area; and fusing, into the second to-be-fused face image in each frame's adjusted second face area, the first to-be-fused face image in the correspondingly adjusted first face area, to obtain the fusion expression image.
Here, the mask image (mask) is used to specify the region in which face fusion is to be performed. The mask image includes a specified fusion region, which is the designated region requiring face fusion. It can be understood that, since what the mask image specifies is the region requiring face fusion, the mask image includes facial feature points.
Specifically, the computer device can obtain, locally or from the server, the mask image configured for each second face area before adjustment. The specified fusion region in the mask image matches the face image region in the corresponding pre-adjustment second face area. It can be understood that, since the expression template is preset, the size of the second face area cropped from the expression template can also be determined in advance, so a mask image corresponding to the second face area can be configured.
The computer device can adjust the distribution positions of the facial feature points in each mask image to the corresponding target distribution positions, where the corresponding target distribution position is the one determined from the feature point's second distribution position in the second face area corresponding to the mask image. It can be understood that, after adjustment, the distribution positions of the facial feature points in each mask image are consistent with the adjusted second distribution positions in the corresponding second face area.
Through the specified fusion region in each correspondingly adjusted mask image, the computer device can determine the second to-be-fused face image in each frame's adjusted second face area and the first to-be-fused face image in the correspondingly adjusted first face area. It can be understood that the distribution regions of the second to-be-fused face image and of the first to-be-fused face image match the specified fusion region in the corresponding mask image. The computer device can fuse, into the second to-be-fused face image in each frame's adjusted second face area, the first to-be-fused face image in the correspondingly adjusted first face area, to obtain the fusion expression image.
In one embodiment, in the adjusted second face area, the computer device can perform a linear or nonlinear fusion calculation on the pixel values of the pixels of the adjusted first face area's first to-be-fused face image and the pixel values of the corresponding pixels of the second face area's second to-be-fused face image, and perform image processing according to the pixel values obtained by the fusion calculation to obtain the fusion expression image.
In one embodiment, computer equipment can be according to the first face to be fused of the first face area after adjustment
The pixel value of respective pixel point in the pixel value of each pixel of image and the second face image to be fused of the second face area
It is weighted average computation, according to the pixel value of each pixel after weighted average calculation, image procossing is carried out and obtains fusion table
Feelings figure.
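The weighted-average fusion described above, restricted to the specified fusion region of the mask figure, can be sketched as follows. This is a minimal illustration, not the patent's implementation: the fixed fusion weight and the function name are assumptions, and the two face images are assumed to be the same size and already deformation-adjusted so that their facial feature points coincide.

```python
import numpy as np

def fuse_faces(first_face, second_face, mask, weight=0.6):
    """Blend the first face image (from the user figure) into the second
    face area, but only inside the mask's specified fusion region.

    first_face, second_face: HxWx3 uint8 arrays, already aligned.
    mask: HxW array, nonzero inside the specified fusion region.
    weight: illustrative fusion weight for the first face image.
    """
    first = first_face.astype(np.float32)
    second = second_face.astype(np.float32)
    # Weighted average of corresponding pixel values.
    blended = weight * first + (1.0 - weight) * second
    # Outside the specified fusion region, keep the second face area unchanged.
    region = (mask > 0)[..., None]
    out = np.where(region, blended, second)
    return out.astype(np.uint8)
```

A nonlinear fusion calculation would replace the weighted average with, for example, a per-pixel function of both inputs; the masking step stays the same.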
It can be understood that, when the second face areas were cropped out of the user figure and each frame of expression figure, the computer device can fuse the first face image to be fused in the correspondingly adjusted first face area into the second face image to be fused in each adjusted second face area, and then splice each second face area containing a fused face image back onto the corresponding expression figure (i.e. the expression figure it was cropped from), obtaining the fusion expression figure.
Fig. 6 is a schematic diagram of the principle of generating a fusion expression figure in one embodiment. With reference to Fig. 6, the computer device crops the user figure and the prototype figure (i.e. an expression figure in the expression template), obtaining a first face area 602 and a second face area 604, and obtains a mask figure 606 corresponding to the second face area 604. The mask figure 606 includes a specified fusion region, i.e. the black region in Fig. 6. The computer device can perform deformation adjustment on the first face area 602, the second face area 604 and the mask figure 606 according to preset deformation parameters, so that the positions of the same target feature point in the first face area 602, the second face area 604 and the mask figure 606 are consistent, in preparation for the subsequent face fusion processing. As can be seen from Fig. 6, the face image in the second face area 604 and the specified fusion region in the mask figure 606 have, in effect, evolved toward the face image in the first face area 602 (that is, the face narrows), while the face image in the first face area 602 has also, to a certain extent, evolved toward the face image in the second face area 604 (that is, the face slightly broadens). According to the specified fusion region included in the mask figure 606, the computer device fuses the face image of the first face area into the face image in the adjusted second face area 604, obtaining a second face area 608 that includes the fused face image. The computer device pastes (i.e. splices) the second face area 608 including the fused face image back onto the corresponding expression figure, obtaining a fusion expression figure 610.
In the above embodiment, the face images to be fused are confined to the specified region given by the mask figure. This avoids the waste of resources caused by superfluous fusion, and prevents excessive fusion from blurring the result and making the features indistinct, thereby improving the accuracy of the fusion expression figure and the accuracy of the information it conveys.
As shown in Fig. 7, in one embodiment, another message treatment method is provided. The method specifically includes the following steps:
S702: obtain the message content input in a social session; analyze the message content to obtain a corresponding keyword; determine an expression template corresponding to the keyword.
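The keyword analysis in S702 can be as simple as matching the message content against a keyword-to-template table. The sketch below illustrates this; the keyword list, template names and function name are hypothetical, since the patent does not specify how the analysis is performed.

```python
# Hypothetical keyword-to-template table; entries are illustrative only.
KEYWORD_TEMPLATES = {
    "haha": ["laugh_template"],
    "bye": ["wave_template"],
}

def match_templates(message: str) -> list[str]:
    """Analyze the message content and return the expression templates
    whose keywords appear in it (possibly more than one)."""
    templates = []
    for keyword, names in KEYWORD_TEMPLATES.items():
        if keyword in message.lower():
            templates.extend(names)
    return templates
```

When several templates match, their options can all be displayed for the user's first selection operation, as described below.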
In one embodiment, the method further includes: displaying options for at least one expression template corresponding to the keyword; and, when a first selection operation on a displayed expression template option is detected, choosing the expression template according to the first selection operation.
S704: display an image acquisition interface; receive an image acquisition operation instruction acting on the image acquisition interface; perform image acquisition in response to the image acquisition operation instruction, obtaining a user figure.
S706: determine the first face area in the user figure; determine the second face area in each frame of expression figure in the expression template; determine the second distributing position corresponding to the facial feature points in each second face area; and determine, in the first face area, the first distributing position corresponding to the facial feature points.
In one embodiment, determining the second face area in each frame of expression figure in the expression template includes: determining the second face area in each frame of expression figure in the specified expression template.
S708: determine the target distribution position corresponding to each facial feature point according to preset deformation parameters, the second distributing positions in each corresponding second face area, and the first distributing positions in the first face area.
S710: according to each corresponding target distribution position, adjust the second distributing position of the facial feature points in each corresponding second face area and the corresponding first distributing position in the first face area.
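One simple way to realize S708 is to interpolate each feature point between its first and second distributing positions. The sketch below assumes a single scalar deformation parameter; this is an assumption for illustration, as the patent leaves the form of the preset deformation parameters open.

```python
def target_positions(first_pts, second_pts, alpha=0.5):
    """Compute a target distribution position for each facial feature point
    by interpolating between its first distributing position (user figure)
    and second distributing position (expression figure).

    first_pts, second_pts: lists of (x, y) tuples, same length and order.
    alpha: illustrative deformation parameter in [0, 1]; 0 keeps the
    expression figure's positions, 1 keeps the user figure's positions.
    """
    return [
        (alpha * x1 + (1 - alpha) * x2, alpha * y1 + (1 - alpha) * y2)
        for (x1, y1), (x2, y2) in zip(first_pts, second_pts)
    ]
```

Both face areas (and the mask figure) are then warped so that each feature point lands on its target distribution position, which is what makes the positions consistent before fusion.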
S712: obtain mask figures corresponding to each second face area before adjustment; adjust the distributing positions of the facial feature points in each mask figure to the corresponding target distribution positions, a corresponding target distribution position being the target distribution position determined from the second distributing position in the second face area corresponding to the mask figure.
S714: through the specified fusion region in the correspondingly adjusted mask figure, determine the second face image to be fused in the adjusted second face area of each frame of expression figure, and determine the first face image to be fused in the correspondingly adjusted first face area.
S716: fuse the first face image to be fused in the correspondingly adjusted first face area into the second face image to be fused in the adjusted second face area of each frame of expression figure, obtaining corresponding fusion expression figures.
S718: synthesize the fusion expression figures obtained from each frame of expression figure in the expression template into a fusion expression dynamic figure; display an option for the obtained fusion expression dynamic figure; and, according to a detected selection operation on the displayed option, send the fusion expression dynamic figure corresponding to the selection operation in the social session.
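The synthesis step in S718 can be sketched with an animated GIF, one common form of dynamic figure. This is an illustration under assumptions: the patent does not fix the file format, the per-frame duration is arbitrary, and the frames are assumed to be Pillow images.

```python
from io import BytesIO
from PIL import Image  # Pillow

def synthesize_dynamic_figure(frames, out, duration_ms=100):
    """Synthesize multiple consecutive fusion expression figures into one
    animated figure (here, a GIF).

    frames: list of PIL.Image objects, the per-frame fusion expression figures.
    out: file path or file-like object to write the animated GIF to.
    duration_ms: illustrative display time per frame.
    """
    first, *rest = frames
    first.save(
        out,
        format="GIF",
        save_all=True,          # write every frame, not just the first
        append_images=rest,     # remaining fusion expression figures
        duration=duration_ms,
        loop=0,                 # loop forever
    )
```

The resulting animated figure is what gets displayed as an option and sent in the social session upon selection.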
S720: query historical fusion expression dynamic figures generated from the expression template corresponding to the keyword; display options for the queried historical fusion expression dynamic figures.
S722: when a second selection operation on a displayed option of a historical fusion expression dynamic figure is detected, choose the fusion expression dynamic figure according to the second selection operation, and send the chosen fusion expression dynamic figure in the social session.
With the above message treatment method, apparatus, computer device and storage medium, after the message content input in the social session is obtained, the corresponding expression template is determined according to the keyword corresponding to the message content, and the face image in the user figure is fused into the face images in the expression template, obtaining fusion expression figures. The fusion expression figure includes the facial features, specified by the user, from the user figure. Compared with the information conveyed by the default third-party expression packs in conventional methods, the fusion expression figure increases the amount of information transmitted, and the added information is specified by the user and can, to a certain extent, embody the user's intent. Therefore, sending a fusion expression figure that includes the user-specified facial features from the user figure in the social session can transmit information more accurately.
As shown in Fig. 8, in one embodiment, a message processing apparatus 800 is provided. The apparatus includes: an acquisition module 802, a template determining module 804, a user figure determining module 806, a fusion expression figure generation module 808 and a sending module 810, wherein:
The acquisition module 802 is used to obtain the message content input in a social session.
The template determining module 804 is used to analyze the message content to obtain a corresponding keyword, and to determine the expression template corresponding to the keyword.
The user figure determining module 806 is used to obtain a user figure.
The fusion expression figure generation module 808 is used to fuse the face image in the user figure into the face images in the expression template, obtaining a fusion expression figure.
The sending module 810 is used to send the fusion expression figure in the social session.
In one embodiment, there is at least one expression template corresponding to the keyword. As shown in Fig. 9, the apparatus 800 further includes:
A display module 805, used to display options for at least one expression template corresponding to the keyword, and, when a first selection operation on a displayed expression template option is detected, to choose the expression template according to the first selection operation.
The fusion expression figure generation module 808 is further used to fuse the face image in the user figure into the face images in the chosen expression template, obtaining the fusion expression figure.
In one embodiment, the acquisition module 802 is further used to query historical fusion expression figures generated from the expression template corresponding to the keyword.
The display module 805 is further used to display options for the queried historical fusion expression figures and, when a second selection operation on a displayed option of a historical fusion expression figure is detected, to choose the fusion expression figure according to the second selection operation.
The sending module 810 is further used to send the chosen fusion expression figure in the social session.
In one embodiment, the user figure determining module 806 is further used to display an image acquisition interface, receive an image acquisition operation instruction acting on the image acquisition interface, and perform image acquisition in response to the instruction, obtaining the user figure.
In one embodiment, the user figure determining module 806 is further used to access a webpage for generating fusion expression figures, detect a trigger operation on the picture upload interface of the webpage, and obtain the user figure according to the trigger operation.
The fusion expression figure generation module 808 is further used to upload the user figure and send the identifier of the expression template to the server through the webpage, and to receive the fusion expression figure fed back by the server; the fed-back fusion expression figure is obtained by fusing the face image in the user figure into the face images in the expression template corresponding to the identifier.
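The webpage upload flow above can be sketched on the client side as follows. The endpoint URL, field names and JSON encoding are all hypothetical (the patent specifies neither a protocol nor a payload format); the request is built but not sent, since the server is a placeholder.

```python
import base64
import json
from urllib import request

FUSION_ENDPOINT = "https://example.com/fusion"  # placeholder, not a real server

def build_fusion_request(user_figure_bytes: bytes, template_id: str) -> request.Request:
    """Package the user figure and the expression template identifier into
    an HTTP request, as the generation module would before uploading them
    to the server through the webpage."""
    payload = {
        "template_id": template_id,
        # A real implementation would likely use multipart form data;
        # base64 keeps this sketch self-contained.
        "user_figure_b64": base64.b64encode(user_figure_bytes).decode("ascii"),
    }
    return request.Request(
        FUSION_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

The server's response would carry the fusion expression figure back to the module for display and sending.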
In one embodiment, the expression template includes multiple consecutive frames of expression figures. The fusion expression figure generation module 808 is further used to fuse the face image in the user figure into the face image in each frame of expression figure, obtaining multiple consecutive fusion expression figures, and to synthesize the multiple consecutive fusion expression figures into a fusion expression dynamic figure.
As shown in Fig. 10, in one embodiment, the fusion expression figure generation module 808 includes:
A face area determining module 808a, used to determine the first face area in the user figure and the second face area in each frame of expression figure;
A position adjusting module 808b, used to adjust, according to the distributing positions of the facial feature points in the second face area of each frame of expression figure, the distributing positions of the facial feature points in the first face area of the user figure;
A fusion module 808c, used to fuse the face image of the first face area of the correspondingly adjusted user figure into the face image of the second face area of each frame of expression figure, obtaining fusion expression figures.
In one embodiment, the position adjusting module 808b is further used to determine the second distributing position corresponding to the facial feature points in each second face area; determine, in the first face area, the first distributing position corresponding to the facial feature points; determine the target distribution position corresponding to each facial feature point according to preset deformation parameters, the second distributing positions in each corresponding second face area and the first distributing positions in the first face area; and adjust, according to each corresponding target distribution position, the second distributing position of the facial feature points in each corresponding second face area and the corresponding first distributing position in the first face area.
The fusion module 808c is further used to fuse the face image of the first face area of the correspondingly adjusted user figure into the face image of the adjusted second face area of each frame of expression figure, obtaining the fusion expression figures.
In one embodiment, the position adjusting module 808b is further used to obtain mask figures corresponding to each second face area before adjustment, and to adjust the distributing positions of the facial feature points in each mask figure to the corresponding target distribution positions, a corresponding target distribution position being the target distribution position determined from the second distributing position in the second face area corresponding to the mask figure. The fusion module 808c is further used to determine, through the specified fusion region in the correspondingly adjusted mask figure, the second face image to be fused in the adjusted second face area of each frame of expression figure and the first face image to be fused in the correspondingly adjusted first face area, and to fuse the first face image to be fused in the correspondingly adjusted first face area into the second face image to be fused in the adjusted second face area of each frame of expression figure, obtaining the fusion expression figure.
Fig. 11 is a schematic diagram of the internal structure of a computer device in one embodiment. Referring to Fig. 11, the computer device may be the first terminal or the second terminal shown in Fig. 1, and includes a processor, a memory, a network interface, a display screen and an input device connected through a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device can store an operating system and a computer program which, when executed, can cause the processor to execute a message treatment method. The processor of the computer device provides calculation and control capability and supports the operation of the entire computer device. A computer program can be stored in the internal memory which, when executed by the processor, can cause the processor to execute a message treatment method. The network interface of the computer device is used for network communication. The display screen of the computer device can be a liquid crystal display screen, an electronic ink display screen or the like. The input device of the computer device can be a touch layer covering the display screen, a button, trackball or trackpad arranged on the terminal housing, or an external keyboard, trackpad, mouse or the like. The computer device can be a personal computer, a mobile terminal or an in-vehicle device; the mobile terminal includes at least one of a mobile phone, a tablet computer, a personal digital assistant, a wearable device and the like.
It will be understood by those skilled in the art that the structure shown in Fig. 11 is only a block diagram of the part of the structure relevant to the solution of the present application, and does not constitute a limitation on the computer device to which the solution is applied. A specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, the message processing apparatus provided by the present application can be implemented in the form of a computer program which can run on the computer device shown in Fig. 11. The non-volatile storage medium of the computer device can store the program modules forming the message processing apparatus, for example, the acquisition module 802, the template determining module 804, the user figure determining module 806, the fusion expression figure generation module 808 and the sending module 810 shown in Fig. 8. The computer program formed by these program modules is used to cause the computer device to execute the steps in the message treatment methods of the embodiments of the present application described in this specification. For example, the computer device can obtain the message content input in the social session through the acquisition module 802 of the message processing apparatus 800 shown in Fig. 8; analyze the message content through the template determining module 804 to obtain a corresponding keyword and determine the expression template corresponding to the keyword; obtain the user figure through the user figure determining module 806; fuse the face image in the user figure into the face images in the expression template through the fusion expression figure generation module 808, obtaining a fusion expression figure; and send the fusion expression figure in the social session through the sending module 810.
In one embodiment, a computer device is provided, including a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to execute the following steps: obtaining the message content input in a social session; analyzing the message content to obtain a corresponding keyword; determining an expression template corresponding to the keyword; obtaining a user figure; fusing the face image in the user figure into the face images in the expression template to obtain a fusion expression figure; and sending the fusion expression figure in the social session.
In one embodiment, there is at least one expression template corresponding to the keyword, and the computer program further causes the processor to execute the following steps: displaying options for the at least one expression template corresponding to the keyword; and, when a first selection operation on a displayed expression template option is detected, choosing the expression template according to the first selection operation. Fusing the face image in the user figure into the face images in the expression template to obtain a fusion expression figure includes: fusing the face image in the user figure into the face images in the chosen expression template to obtain the fusion expression figure.
In one embodiment, the computer program further causes the processor to execute the following steps: querying historical fusion expression figures generated from the expression template corresponding to the keyword; displaying options for the queried historical fusion expression figures; when a second selection operation on a displayed option of a historical fusion expression figure is detected, choosing the fusion expression figure according to the second selection operation; and sending the chosen fusion expression figure in the social session.
In one embodiment, obtaining the user figure includes: displaying an image acquisition interface; receiving an image acquisition operation instruction acting on the image acquisition interface; and performing image acquisition in response to the image acquisition operation instruction to obtain the user figure.
In one embodiment, obtaining the user figure includes: accessing a webpage for generating fusion expression figures; detecting a trigger operation on the picture upload interface of the webpage; and obtaining the user figure according to the trigger operation. Fusing the face image in the user figure into the face images in the expression template to obtain a fusion expression figure includes: uploading the user figure and sending the identifier of the expression template to the server through the webpage; and receiving the fusion expression figure fed back by the server, the fed-back fusion expression figure being obtained by fusing the face image in the user figure into the face images in the expression template corresponding to the identifier.
In one embodiment, the expression template includes multiple consecutive frames of expression figures, and fusing the face image in the user figure into the face images in the expression template to obtain a fusion expression figure includes: fusing the face image in the user figure into the face image in each frame of expression figure to obtain multiple consecutive fusion expression figures; and synthesizing the multiple consecutive fusion expression figures into a fusion expression dynamic figure.
In one embodiment, fusing the face image in the user figure into the face image in each frame of expression figure to obtain multiple consecutive fusion expression figures includes: determining the first face area in the user figure; determining the second face area in each frame of expression figure; adjusting, according to the distributing positions of the facial feature points in the second face area of each frame of expression figure, the distributing positions of the facial feature points in the first face area of the user figure; and fusing the face image of the first face area of the correspondingly adjusted user figure into the face image of the second face area of each frame of expression figure to obtain the fusion expression figures.
In one embodiment, adjusting, according to the distributing positions of the facial feature points in the second face area of each frame of expression figure, the distributing positions of the facial feature points in the first face area of the user figure includes: determining the second distributing position corresponding to the facial feature points in each second face area; determining, in the first face area, the first distributing position corresponding to the facial feature points; determining the target distribution position corresponding to each facial feature point according to preset deformation parameters, the second distributing positions in each corresponding second face area and the first distributing positions in the first face area; and adjusting, according to each corresponding target distribution position, the second distributing position of the facial feature points in each corresponding second face area and the corresponding first distributing position in the first face area. Fusing the face image of the first face area of the correspondingly adjusted user figure into the face image of the second face area of each frame of expression figure to obtain the fusion expression figures includes: fusing the face image of the first face area of the correspondingly adjusted user figure into the face image of the adjusted second face area of each frame of expression figure to obtain the fusion expression figures.
In one embodiment, the computer program further causes the processor to execute the following steps: obtaining mask figures corresponding to each second face area before adjustment; and adjusting the distributing positions of the facial feature points in each mask figure to the corresponding target distribution positions, a corresponding target distribution position being the target distribution position determined from the second distributing position in the second face area corresponding to the mask figure. Fusing the face image of the first face area of the correspondingly adjusted user figure into the face image of the adjusted second face area of each frame of expression figure to obtain the fusion expression figures includes: determining, through the specified fusion region in the correspondingly adjusted mask figure, the second face image to be fused in the adjusted second face area of each frame of expression figure and the first face image to be fused in the correspondingly adjusted first face area; and fusing the first face image to be fused in the correspondingly adjusted first face area into the second face image to be fused in the adjusted second face area of each frame of expression figure to obtain the fusion expression figure.
In one embodiment, a storage medium storing a computer program is provided. When the computer program is executed by a processor, the processor executes the following steps: obtaining the message content input in a social session; analyzing the message content to obtain a corresponding keyword; determining an expression template corresponding to the keyword; obtaining a user figure; fusing the face image in the user figure into the face images in the expression template to obtain a fusion expression figure; and sending the fusion expression figure in the social session.
In one embodiment, there is at least one expression template corresponding to the keyword, and the computer program further causes the processor to execute the following steps: displaying options for the at least one expression template corresponding to the keyword; and, when a first selection operation on a displayed expression template option is detected, choosing the expression template according to the first selection operation. Fusing the face image in the user figure into the face images in the expression template to obtain a fusion expression figure includes: fusing the face image in the user figure into the face images in the chosen expression template to obtain the fusion expression figure.
In one embodiment, the computer program further causes the processor to execute the following steps: querying historical fusion expression figures generated from the expression template corresponding to the keyword; displaying options for the queried historical fusion expression figures; when a second selection operation on a displayed option of a historical fusion expression figure is detected, choosing the fusion expression figure according to the second selection operation; and sending the chosen fusion expression figure in the social session.
In one embodiment, obtaining the user figure includes: displaying an image acquisition interface; receiving an image acquisition operation instruction acting on the image acquisition interface; and performing image acquisition in response to the image acquisition operation instruction to obtain the user figure.
In one embodiment, obtaining the user figure includes: accessing a webpage for generating fusion expression figures; detecting a trigger operation on the picture upload interface of the webpage; and obtaining the user figure according to the trigger operation. Fusing the face image in the user figure into the face images in the expression template to obtain a fusion expression figure includes: uploading the user figure and sending the identifier of the expression template to the server through the webpage; and receiving the fusion expression figure fed back by the server, the fed-back fusion expression figure being obtained by fusing the face image in the user figure into the face images in the expression template corresponding to the identifier.
In one embodiment, the expression template includes multiple consecutive frames of expression figures, and fusing the face image in the user figure into the face images in the expression template to obtain a fusion expression figure includes: fusing the face image in the user figure into the face image in each frame of expression figure to obtain multiple consecutive fusion expression figures; and synthesizing the multiple consecutive fusion expression figures into a fusion expression dynamic figure.
In one embodiment, fusing the face image in the user figure into the face image in each frame of expression figure to obtain multiple consecutive fusion expression figures includes: determining the first face area in the user figure; determining the second face area in each frame of expression figure; adjusting, according to the distributing positions of the facial feature points in the second face area of each frame of expression figure, the distributing positions of the facial feature points in the first face area of the user figure; and fusing the face image of the first face area of the correspondingly adjusted user figure into the face image of the second face area of each frame of expression figure to obtain the fusion expression figures.
In one embodiment, adjusting, according to the distributing positions of the facial feature points in the second face area of each frame of expression figure, the distributing positions of the facial feature points in the first face area of the user figure includes: determining the second distributing position corresponding to the facial feature points in each second face area; determining, in the first face area, the first distributing position corresponding to the facial feature points; determining the target distribution position corresponding to each facial feature point according to preset deformation parameters, the second distributing positions in each corresponding second face area and the first distributing positions in the first face area; and adjusting, according to each corresponding target distribution position, the second distributing position of the facial feature points in each corresponding second face area and the corresponding first distributing position in the first face area. Fusing the face image of the first face area of the correspondingly adjusted user figure into the face image of the second face area of each frame of expression figure to obtain the fusion expression figures includes: fusing the face image of the first face area of the correspondingly adjusted user figure into the face image of the adjusted second face area of each frame of expression figure to obtain the fusion expression figures.
In one embodiment, the computer program further causes the processor to execute the following steps: obtaining mask figures corresponding to each second face area before adjustment; and adjusting the distributing positions of the facial feature points in each mask figure to the corresponding target distribution positions, a corresponding target distribution position being the target distribution position determined from the second distributing position in the second face area corresponding to the mask figure. Fusing the face image of the first face area of the correspondingly adjusted user figure into the face image of the adjusted second face area of each frame of expression figure to obtain the fusion expression figures includes: determining, through the specified fusion region in the correspondingly adjusted mask figure, the second face image to be fused in the adjusted second face area of each frame of expression figure and the first face image to be fused in the correspondingly adjusted first face area; and fusing the first face image to be fused in the correspondingly adjusted first face area into the second face image to be fused in the adjusted second face area of each frame of expression figure to obtain the fusion expression figure.
It should be understood that the steps in the embodiments of the present application are not necessarily executed in the order indicated by the step numbers. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and they can be executed in other orders. Moreover, at least some of the steps in each embodiment may include multiple sub-steps or stages. These sub-steps or stages are not necessarily completed at the same moment but can be executed at different times, and their execution order is not necessarily sequential; they can be executed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
A person of ordinary skill in the art will understand that all or part of the flows in the above method embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the flows of the embodiments of the above methods. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For conciseness of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it shall be considered within the scope of this specification.
The above embodiments express only several implementations of the present invention, and their descriptions are relatively specific and detailed, but they shall not therefore be construed as limiting the scope of the patent. It should be pointed out that a person of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present invention, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be determined by the appended claims.
Claims (15)
1. A message processing method, the method comprising:
obtaining message content input in a social session;
analyzing the message content to obtain a corresponding keyword;
determining an expression template corresponding to the keyword;
obtaining a user figure;
fusing a face image in the user figure into a face image in the expression template to obtain a fusion expression figure; and
sending the fusion expression figure in the social session.
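The flow of claim 1 can be sketched end to end. The keyword table, the function names, and the substring-match analysis below are hypothetical stand-ins; the claim does not fix how the message content is analyzed or how templates are stored.

```python
from typing import Optional

# Hypothetical keyword -> expression-template mapping.
TEMPLATES = {"haha": "laugh_template.png", "cry": "sob_template.png"}

def extract_keyword(message: str) -> Optional[str]:
    """Analyze the message content for a keyword (substring match here)."""
    lowered = message.lower()
    for keyword in TEMPLATES:
        if keyword in lowered:
            return keyword
    return None

def handle_message(message: str) -> Optional[str]:
    """Claim-1 pipeline: keyword -> template -> (fusion, elided) -> send."""
    keyword = extract_keyword(message)
    if keyword is None:
        return None                      # no matching expression template
    template = TEMPLATES[keyword]
    # Obtaining the user figure and fusing the face images is elided;
    # the returned string stands in for the fusion expression figure.
    return f"fusion:{template}"
```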
2. The method according to claim 1, wherein there is at least one expression template corresponding to the keyword; the method further comprises:
displaying options of the at least one expression template corresponding to the keyword; and
when a first selection operation on an option of a displayed expression template is detected, determining the expression template chosen by the first selection operation;
wherein fusing the face image in the user figure into the face image in the expression template to obtain the fusion expression figure comprises:
fusing the face image in the user figure into the face image of the chosen expression template to obtain the fusion expression figure.
3. The method according to claim 2, further comprising:
querying historical fusion expression figures generated according to the expression template corresponding to the keyword;
displaying options of the queried historical fusion expression figures;
when a second selection operation on an option of a displayed historical fusion expression figure is detected, determining the fusion expression figure chosen by the second selection operation; and
sending the chosen fusion expression figure in the social session.
4. The method according to any one of claims 1 to 3, wherein obtaining the user figure comprises:
displaying an image acquisition interface;
receiving an image acquisition operation instruction acting on the image acquisition interface; and
performing image acquisition in response to the image acquisition operation instruction to obtain the user figure.
5. The method according to any one of claims 1 to 3, wherein obtaining the user figure comprises:
accessing a webpage for generating a fusion expression figure;
detecting a trigger operation on a picture upload interface of the webpage; and
obtaining the user figure according to the trigger operation;
wherein fusing the face image in the user figure into the face image in the expression template to obtain the fusion expression figure comprises:
uploading, through the webpage, the user figure and an identifier of the expression template to a server; and
receiving the fusion expression figure fed back by the server, the fed-back fusion expression figure being obtained by fusing the face image in the user figure into the face image in the expression template corresponding to the identifier of the expression template.
6. The method according to any one of claims 1 to 3, wherein the expression template comprises multiple frames of continuous expression figures;
fusing the face image in the user figure into the face image in the expression template to obtain the fusion expression figure comprises:
respectively fusing the face image in the user figure into the face image in each frame of the expression figures to obtain multiple frames of continuous fusion expression figures; and
synthesizing the multiple frames of continuous fusion expression figures into a fusion expression dynamic figure.
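The per-frame fusion and synthesis of claim 6 can be sketched with arrays standing in for frames; stacking the fused frames represents the dynamic figure. The blend weight `alpha` is illustrative, and an actual implementation might instead write the frames out as an animated image, for example with Pillow's `Image.save(..., save_all=True)`.

```python
import numpy as np

def fuse_all_frames(user_face: np.ndarray, template_frames: list,
                    alpha: float = 0.7) -> np.ndarray:
    """Fuse the same user face into every template frame, then stack
    the fused frames into one (frame, H, W, C) array standing in for
    the fusion expression dynamic figure."""
    fused = [(alpha * user_face + (1.0 - alpha) * frame).astype(np.uint8)
             for frame in template_frames]
    return np.stack(fused)
```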
7. The method according to claim 6, wherein respectively fusing the face image in the user figure into the face image in each frame of the expression figures to obtain the multiple frames of continuous fusion expression figures comprises:
determining a first face area in the user figure;
determining a second face area in each frame of the expression figures;
respectively adjusting, according to the distributing positions of facial feature points in the second face area of each frame of the expression figures, the distributing positions of the facial feature points in the first face area of the user figure correspondingly; and
respectively fusing, into the face image of the second face area of each frame of the expression figures, the face image of the first face area in the correspondingly adjusted user figure, to obtain the fusion expression figures.
8. The method according to claim 7, wherein respectively adjusting, according to the distributing positions of the facial feature points in the second face area of each frame of the expression figures, the distributing positions of the facial feature points in the first face area of the user figure comprises:
respectively determining the second distributing position corresponding to each facial feature point in each second face area;
determining the first distributing position corresponding to each facial feature point in the first face area;
determining the target distribution position corresponding to each facial feature point according to a preset deformation parameter, the second distributing position in each corresponding second face area, and the first distributing position in the first face area; and
respectively adjusting, according to each corresponding target distribution position, the second distributing positions of the facial feature points in each corresponding second face area and the corresponding first distributing positions in the first face area;
wherein respectively fusing, into the face image of the second face area in each frame of the expression figures, the face image of the first face area in the correspondingly adjusted user figure to obtain the fusion expression figures comprises:
fusing, into the face image of the second face area after adjustment of each frame of the expression figures, the face image of the first face area in the correspondingly adjusted user figure, to obtain the fusion expression figures.
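The target distribution positions of claim 8 can be sketched as an interpolation controlled by the preset deformation parameter. The linear form and the 0-to-1 range of `deform` are assumptions for illustration; the claim does not specify the interpolation.

```python
import numpy as np

def target_positions(first_positions, second_positions,
                     deform: float = 0.5) -> np.ndarray:
    """Interpolate each facial feature point's target distribution
    position between its first distributing position (user figure) and
    second distributing position (template frame). deform=0 keeps the
    user's geometry; deform=1 fully adopts the template's."""
    first = np.asarray(first_positions, dtype=np.float64)
    second = np.asarray(second_positions, dtype=np.float64)
    return (1.0 - deform) * first + deform * second
```

With `deform = 0.5`, a point at (0, 0) in the user figure and (10, 6) in the template frame has a target position of (5, 3).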
9. The method according to claim 8, further comprising:
respectively obtaining a mask figure corresponding to each second face area before adjustment; and
adjusting the distributing positions of the facial feature points in each mask figure to the corresponding target distribution positions, the corresponding target distribution position being the target distribution position determined for a facial feature point according to its second distributing position in the second face area corresponding to the mask figure;
wherein fusing, into the face image of the second face area after adjustment of each frame of the expression figures, the face image of the first face area in the correspondingly adjusted user figure to obtain the fusion expression figures comprises:
determining, according to a specified integration region in the correspondingly adjusted mask figure, a second face image to be fused in the second face area after adjustment of each frame of the expression figures, and determining a first face image to be fused in the correspondingly adjusted first face area; and
fusing, into the second face image to be fused in the second face area after adjustment of each frame of the expression figures, the first face image to be fused in the correspondingly adjusted first face area, to obtain the fusion expression figures.
10. A message processing apparatus, the apparatus comprising:
an acquisition module, configured to obtain message content input in a social session;
a template determining module, configured to analyze the message content to obtain a corresponding keyword, and determine an expression template corresponding to the keyword;
a user figure determining module, configured to obtain a user figure;
a fusion expression figure generation module, configured to fuse a face image in the user figure into a face image in the expression template to obtain a fusion expression figure; and
a sending module, configured to send the fusion expression figure in the social session.
11. The apparatus according to claim 10, wherein the expression template comprises multiple frames of continuous expression figures;
the fusion expression figure generation module is further configured to respectively fuse the face image in the user figure into the face image in each frame of the expression figures to obtain multiple frames of continuous fusion expression figures, and synthesize the multiple frames of continuous fusion expression figures into a fusion expression dynamic figure.
12. The apparatus according to claim 11, wherein the fusion expression figure generation module comprises:
a face area determining module, configured to determine a first face area in the user figure and determine a second face area in each frame of the expression figures;
a position adjusting module, configured to respectively adjust, according to the distributing positions of facial feature points in the second face area of each frame of the expression figures, the distributing positions of the facial feature points in the first face area of the user figure correspondingly; and
a fusion module, configured to respectively fuse, into the face image of the second face area of each frame of the expression figures, the face image of the first face area in the correspondingly adjusted user figure, to obtain the fusion expression figures.
13. The apparatus according to claim 12, wherein the position adjusting module is further configured to: respectively determine the second distributing position corresponding to each facial feature point in each second face area; determine the first distributing position corresponding to each facial feature point in the first face area; determine the target distribution position corresponding to each facial feature point according to a preset deformation parameter, the second distributing position in each corresponding second face area, and the first distributing position in the first face area; and respectively adjust, according to each corresponding target distribution position, the second distributing positions of the facial feature points in each corresponding second face area and the corresponding first distributing positions in the first face area;
the fusion module is further configured to fuse, into the face image of the second face area after adjustment of each frame of the expression figures, the face image of the first face area in the correspondingly adjusted user figure, to obtain the fusion expression figures.
14. A computer device, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to execute the steps of the method according to any one of claims 1 to 9.
15. A storage medium storing a computer program which, when executed by a processor, causes the processor to execute the steps of the method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810119755.1A CN108388557A (en) | 2018-02-06 | 2018-02-06 | Message treatment method, device, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108388557A true CN108388557A (en) | 2018-08-10 |
Family
ID=63075300
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810119755.1A Pending CN108388557A (en) | 2018-02-06 | 2018-02-06 | Message treatment method, device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108388557A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160055370A1 (en) * | 2014-08-21 | 2016-02-25 | Futurewei Technologies, Inc. | System and Methods of Generating User Facial Expression Library for Messaging and Social Networking Applications |
CN106415664A (en) * | 2014-08-21 | 2017-02-15 | 华为技术有限公司 | System and methods of generating user facial expression library for messaging and social networking applications |
CN105791692A (en) * | 2016-03-14 | 2016-07-20 | 腾讯科技(深圳)有限公司 | Information processing method and terminal |
CN106875460A (en) * | 2016-12-27 | 2017-06-20 | 深圳市金立通信设备有限公司 | A kind of picture countenance synthesis method and terminal |
CN107219917A (en) * | 2017-04-28 | 2017-09-29 | 北京百度网讯科技有限公司 | Emoticon generation method and device, computer equipment and computer-readable recording medium |
CN107578459A (en) * | 2017-08-31 | 2018-01-12 | 北京麒麟合盛网络技术有限公司 | Expression is embedded in the method and device of candidates of input method |
Non-Patent Citations (1)
Title |
---|
廖广军 (Liao Guangjun), 《公安数字影像处理与分析》 [Digital Image Processing and Analysis for Public Security], vol. 1, South China University of Technology Press, pp. 169-170
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110163063A (en) * | 2018-11-28 | 2019-08-23 | 腾讯数码(天津)有限公司 | Expression processing method, device, computer readable storage medium and computer equipment |
CN110163063B (en) * | 2018-11-28 | 2024-05-28 | 腾讯数码(天津)有限公司 | Expression processing method, apparatus, computer readable storage medium and computer device |
CN109918675A (en) * | 2019-03-15 | 2019-06-21 | 福建工程学院 | A kind of the network expression picture automatic generation method and device of context-aware |
CN112116682B (en) * | 2019-06-20 | 2023-12-12 | 腾讯科技(深圳)有限公司 | Method, device, equipment and system for generating cover picture of information display page |
CN112116682A (en) * | 2019-06-20 | 2020-12-22 | 腾讯科技(深圳)有限公司 | Method, device, equipment and system for generating cover picture of information display page |
WO2021012921A1 (en) * | 2019-07-22 | 2021-01-28 | 腾讯科技(深圳)有限公司 | Image data processing method and apparatus, and electronic device and storage medium |
CN110414404A (en) * | 2019-07-22 | 2019-11-05 | 腾讯科技(深圳)有限公司 | Image processing method, device and storage medium based on instant messaging |
CN110619513A (en) * | 2019-09-11 | 2019-12-27 | 腾讯科技(深圳)有限公司 | Electronic resource obtaining method, electronic resource distributing method and related equipment |
CN110633361A (en) * | 2019-09-26 | 2019-12-31 | 联想(北京)有限公司 | Input control method and device and intelligent session server |
CN111541950B (en) * | 2020-05-07 | 2023-11-03 | 腾讯科技(深圳)有限公司 | Expression generating method and device, electronic equipment and storage medium |
CN111541950A (en) * | 2020-05-07 | 2020-08-14 | 腾讯科技(深圳)有限公司 | Expression generation method and device, electronic equipment and storage medium |
CN114816599A (en) * | 2021-01-22 | 2022-07-29 | 北京字跳网络技术有限公司 | Image display method, apparatus, device and medium |
CN114816599B (en) * | 2021-01-22 | 2024-02-27 | 北京字跳网络技术有限公司 | Image display method, device, equipment and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108388557A (en) | Message treatment method, device, computer equipment and storage medium | |
CN109348276B (en) | video picture adjusting method and device, computer equipment and storage medium | |
JP7112508B2 (en) | Animation stamp generation method, its computer program and computer device | |
CN111105819B (en) | Clipping template recommendation method and device, electronic equipment and storage medium | |
WO2021008166A1 (en) | Method and apparatus for virtual fitting | |
KR20190084278A (en) | Automatic suggestions for sharing images | |
US10778939B2 (en) | Media effects using predicted facial feature locations | |
US11558543B2 (en) | Modifying capture of video data by an image capture device based on video data previously captured by the image capture device | |
US10805521B2 (en) | Modifying capture of video data by an image capture device based on video data previously captured by the image capture device | |
CN110751149A (en) | Target object labeling method and device, computer equipment and storage medium | |
WO2019015522A1 (en) | Emoticon image generation method and device, electronic device, and storage medium | |
CN114500432A (en) | Session message transceiving method and device, electronic equipment and readable storage medium | |
JP2022526053A (en) | Techniques for capturing and editing dynamic depth images | |
CN109472849A (en) | Method, apparatus, terminal device and the storage medium of image in processing application | |
CN108986009A (en) | Generation method, device and the electronic equipment of picture | |
CN111223155B (en) | Image data processing method, device, computer equipment and storage medium | |
CN116208791A (en) | Computer-implemented method and storage medium | |
CN109587040B (en) | Mail processing method, system, computer device and storage medium | |
CN113918070A (en) | Synchronous display method and device, readable storage medium and electronic equipment | |
JP2011192008A (en) | Image processing system and image processing method | |
CN109656995B (en) | Data export method, device, terminal, server and storage medium | |
US10848687B2 (en) | Modifying presentation of video data by a receiving client device based on analysis of the video data by another client device capturing the video data | |
WO2023066100A1 (en) | File sharing method and apparatus | |
CN115174506A (en) | Session information processing method, device, readable storage medium and computer equipment | |
JP7338935B2 (en) | terminal display method, terminal, terminal program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||