CN104753766B - Expression sending method and device

Info

Publication number
CN104753766B
Authority
CN
China
Prior art keywords
user
expression
opposite end
communication message
facial image
Legal status: Active (the status listed is an assumption and is not a legal conclusion)
Application number
CN201510093000.5A
Other languages
Chinese (zh)
Other versions
CN104753766A (en)
Inventor
陈志军
龙飞
张涛
Current Assignee: Xiaomi Inc
Original Assignee: Xiaomi Inc
Priority date: 2015-03-02
Filing date: 2015-03-02
Application filed by Xiaomi Inc
Priority to CN201510093000.5A
Publication of CN104753766A (application publication): 2015-07-01
Application granted; publication of CN104753766B: 2019-03-22
Status: Active


Abstract

The present disclosure relates to an expression sending method and device, and belongs to the field of Internet technology. The method includes: acquiring a facial image of a user while the user is sending or receiving a communication message; generating, from the facial image, a user expression corresponding to the user's current mood, the user expression being at least one of a picture expression and a text expression; and sending the user expression to the peer while the communication message is being exchanged. The disclosure addresses the problem that the expression pictures in the expression library provided by a communication client cannot accurately convey the user's current emotional state, and achieves the effect that the communication messages exchanged by the user carry an expression that matches the user's current mood and therefore accurately express that emotional state.

Description

Expression sending method and device
Technical field
The present disclosure relates to the field of Internet technology, and in particular to an expression sending method and device.
Background
When users communicate through various communication clients, they can send not only plain text messages, but can also manually choose an expression picture from the expression library provided by the communication client and send that picture, together with the text, to the other client. However, the expression pictures in the library provided by the communication client cannot accurately convey the user's current emotional state.
Summary of the invention
The embodiments of the present disclosure provide an expression sending method and device. The technical solutions are as follows.
According to a first aspect of the embodiments of the present disclosure, an expression sending method is provided. The method includes:
acquiring a facial image of a user while the user is sending or receiving a communication message;
generating, from the facial image, a user expression corresponding to the user's current mood, the user expression being at least one of a picture expression and a text expression; and
sending the user expression to the peer while the communication message is being exchanged.
Optionally, generating the user expression corresponding to the user's current mood from the facial image includes:
extracting the face region from the facial image; and
performing image processing on the face region to generate the user expression corresponding to the user's current mood, where the image processing includes at least one of filter processing, stylization processing, and grayscale processing.
Optionally, generating the user expression corresponding to the user's current mood from the facial image includes:
extracting the face region from the facial image;
recognizing the expression type of the face region; and
selecting, from a preset expression library, an expression that matches the expression type as the user expression.
Optionally, sending the user expression to the peer while the communication message is being exchanged includes:
setting the user expression as the user's real-time avatar and sending it to the peer, the peer replacing the original user avatar with the real-time avatar;
or,
adding the user expression to the communication message to be sent and sending the communication message to the peer, the peer displaying the communication message carrying the user expression.
Optionally, the method further includes:
when there are at least two user expressions, displaying the at least two user expressions;
receiving a selection signal for one of the user expressions; and
according to the selection signal, determining the corresponding user expression as the user expression to be sent.
Optionally, acquiring the facial image of the user while the user is sending or receiving a communication message includes:
when the application running in the foreground is a communication application, acquiring a facial image through the front camera at predetermined time intervals.
According to a second aspect of the embodiments of the present disclosure, an expression sending device is provided. The device includes:
an acquisition module configured to acquire a facial image of a user while the user is sending or receiving a communication message;
a generation module configured to generate, from the facial image, a user expression corresponding to the user's current mood, the user expression being at least one of a picture expression and a text expression; and
a sending module configured to send the user expression to the peer while the communication message is being exchanged.
Optionally, the generation module includes:
a first extraction submodule configured to extract the face region from the facial image; and
a processing submodule configured to perform image processing on the face region to generate the user expression corresponding to the user's current mood, where the image processing includes at least one of filter processing, stylization processing, and grayscale processing.
Optionally, the generation module includes:
a second extraction submodule configured to extract the face region from the facial image;
a recognition submodule configured to recognize the expression type of the face region; and
a selection submodule configured to select, from a preset expression library, an expression that matches the expression type as the user expression.
Optionally, the sending module includes:
a first sending submodule configured to set the user expression as the user's real-time avatar and send it to the peer, the peer replacing the original user avatar with the real-time avatar;
or,
a second sending submodule configured to add the user expression to the communication message to be sent and send the communication message to the peer, the peer displaying the communication message carrying the user expression.
Optionally, the device further includes:
a display module configured to display at least two user expressions when there are at least two user expressions;
a receiving module configured to receive a selection signal for one of the user expressions; and
a determination module configured to determine, according to the selection signal, the corresponding user expression as the user expression to be sent.
Optionally, the acquisition module is further configured to acquire a facial image through the front camera at predetermined time intervals when the application running in the foreground is a communication application.
According to a third aspect of the embodiments of the present disclosure, an expression sending device is provided. The device includes:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
acquire a facial image of a user while the user is sending or receiving a communication message;
generate, from the facial image, a user expression corresponding to the user's current mood, the user expression being at least one of a picture expression and a text expression; and
send the user expression to the peer while the communication message is being exchanged.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects.
By generating a user expression from the facial image acquired while the user is sending or receiving communication messages, and sending that user expression to the peer, the solution addresses the problem that the expression pictures in the expression library provided by a communication client cannot accurately convey the user's current emotional state. The communication messages exchanged by the user carry an expression that matches the user's current mood, so the user's current emotional state is accurately expressed.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart of an expression sending method according to an exemplary embodiment;
Fig. 2A is a flowchart of an expression sending method according to another exemplary embodiment;
Fig. 2B is a schematic diagram of an implementation of the expression sending method according to another exemplary embodiment;
Fig. 2C is a schematic diagram of an implementation of the expression sending method according to another exemplary embodiment;
Fig. 3 is a flowchart of an expression sending method according to yet another exemplary embodiment;
Fig. 4 is a structural block diagram of an expression sending device according to an exemplary embodiment;
Fig. 5 is a structural block diagram of an expression sending device according to another exemplary embodiment;
Fig. 6 is a block diagram of an expression sending device according to an exemplary embodiment.
The specific embodiments of the present disclosure shown in the above drawings are described in more detail below. These drawings and the accompanying text are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Detailed description of the embodiments
Exemplary embodiments are described in detail here, and examples thereof are shown in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of devices and methods consistent with some aspects of the disclosure, as recited in the appended claims.
The expression sending method provided by the embodiments of the present disclosure can be implemented by an electronic device on which a communication client is installed. The electronic device may be a smartphone, a tablet computer, an e-book reader, a laptop computer, or the like.
To simplify the description, the following takes the case where the expression sending method is executed by a communication client as an example, which does not constitute a limitation.
Fig. 1 is a flowchart of an expression sending method according to an exemplary embodiment. This embodiment is described by taking the case where the expression sending method is applied to a communication client as an example. The expression sending method may include the following steps.
In step 102, a facial image of the user is acquired while the user is sending or receiving a communication message.
When the application running in the foreground is a communication application, a facial image is acquired through the front camera at predetermined time intervals.
In step 104, a user expression corresponding to the user's current mood is generated from the facial image, the user expression being at least one of a picture expression and a text expression.
In step 106, the user expression is sent to the peer while the communication message is being exchanged.
In summary, the expression sending method provided by this exemplary embodiment generates a user expression from the facial image acquired while the user is sending or receiving communication messages, and sends that user expression to the peer. This addresses the problem that the expression pictures in the expression library provided by a communication client cannot accurately convey the user's current emotional state: the communication messages exchanged by the user carry an expression that matches the user's current mood, so the user's current emotional state is accurately expressed.
The communication client can acquire the user's facial image through the front camera, perform image processing on the acquired facial image to generate a corresponding user expression, and send the user expression to the peer as the user's real-time avatar, so that the real-time avatar seen by the peer user reflects the user's expression at that moment and is more vivid. This is illustrated with an exemplary embodiment below.
Fig. 2A is a flowchart of an expression sending method according to another exemplary embodiment. This embodiment is described by taking the case where the expression sending method is applied to a smartphone as an example. The expression sending method may include the following steps.
In step 201, when the application running in the foreground is a communication application, a facial image is acquired through the front camera at predetermined time intervals.
While the user is sending or receiving messages through a communication client installed on the smartphone, that is, while the application running in the foreground of the smartphone is a communication application, the front camera of the smartphone acquires a facial image at predetermined time intervals; the intervals may be equal or unequal. The communication client may be an instant messaging client, a rich communication client, or the like, which is not limited by the present disclosure.
For example, when the smartphone detects that the user is sending or receiving messages with the communication client, it may acquire a facial image through the front camera every second. As another example, a facial image may be acquired when no input is detected, that is, when the user is reading a received message, and another facial image may be acquired when text, voice, or other input is detected, that is, when the user is replying to a message.
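Purely as an illustration of the interval-based acquisition in step 201 (not the patent's implementation), the following Python sketch uses OpenCV's VideoCapture as a stand-in for the phone's front camera; the is_foreground_communication_app() helper and the one-second interval are assumptions introduced for this example.

    import time
    import cv2  # OpenCV; stands in for the smartphone camera API in this sketch

    CAPTURE_INTERVAL_SECONDS = 1.0  # illustrative value; the method only requires "predetermined intervals"

    def is_foreground_communication_app():
        # Hypothetical placeholder: on a real device this would query the OS for the
        # foreground application and compare it against known communication clients.
        # Always True here so the sketch stays runnable.
        return True

    def acquire_facial_images():
        camera = cv2.VideoCapture(0)  # device 0 as a stand-in for the front camera
        try:
            while is_foreground_communication_app():
                ok, frame = camera.read()
                if ok:
                    yield frame  # one facial image per interval, handed to the later steps
                time.sleep(CAPTURE_INTERVAL_SECONDS)
        finally:
            camera.release()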
In step 202, the face region in the facial image is extracted.
Because of factors such as the shooting angle of the front camera, a facial image acquired by the front camera may not actually contain a face. To improve the accuracy of the generated user expression, the communication client therefore also performs face detection on the acquired facial image to determine whether it contains a face, and filters out images that do not contain a face. Face detection may use, for example, an iterative face detection algorithm, which is not limited by the present disclosure.
Because the user expression is generated from the facial image, the communication client needs to extract the face region from the facial image and generate the corresponding user expression from that region. The face region is usually extracted by locating facial feature points in the image with a statistical model built from a training set, and then extracting the face region based on the located feature points; the present disclosure does not limit the method used to extract the face region.
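A minimal sketch of the detection and cropping in step 202 follows; it uses OpenCV's bundled Haar cascade as a stand-in for the trained statistical model mentioned above, which is an assumption of this example rather than the patent's specific algorithm.

    import cv2

    # Haar cascade shipped with OpenCV; a stand-in for the trained statistical model
    FACE_CASCADE = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def extract_face_region(frame):
        """Return the cropped face region, or None if no face is detected."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None  # image filtered out: no face present
        x, y, w, h = faces[0]  # keep the first detected face for this sketch
        return frame[y:y + h, x:x + w]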
In step 203, image processing is performed on the face region to generate a user expression corresponding to the user's current mood; the image processing includes at least one of filter processing, stylization processing, and grayscale processing.
To make the generated user expression more vivid and varied in form, after extracting the face region from the facial image the communication client also performs the corresponding image processing on the extracted face region to generate the user expression corresponding to the user's current mood.
Because the facial feature points in the facial image have already been located in step 202, the communication client can further recognize the facial expression from those feature points, that is, determine the user's current mood from the facial feature points, and then apply image processing chosen according to the determined mood so as to emphasize it. The image processing may be filter processing, stylization processing, grayscale processing, or the like.
For example, when the image processing is stylization, the communication client determines from the facial feature points that the user's current mood is happy, and applies stylization to the face region to generate a user expression that conveys happiness.
As another example, when the image processing is filter processing, the communication client determines from the facial feature points that the user's current mood is happy, and may then apply a filter that brightens and saturates the colors of the face region, so that the generated user expression corresponds to a happy mood.
It should be noted that the communication client may also add, according to the user's current mood, a corresponding preset mark to the generated user expression; the preset mark may be a text mark or an image mark. For example, when the user's current mood is happy, a sun image may be added to the generated user expression; when the user's current mood is sad, a dark-cloud image may be added.
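The mood-dependent processing in step 203 might look like the sketch below; the specific operations chosen for each mood (a brightness/saturation boost as the "filter", grayscale conversion, and OpenCV's stylization) are illustrative choices for this example, not the particular processing prescribed by the patent.

    import cv2
    import numpy as np

    def generate_user_expression(face_region, mood):
        """Apply mood-dependent image processing to the cropped face region."""
        if mood == "happy":
            # Filter processing: brighten and saturate to emphasize a happy mood
            hsv = cv2.cvtColor(face_region, cv2.COLOR_BGR2HSV).astype(np.int16)
            hsv[:, :, 1] = np.clip(hsv[:, :, 1] + 40, 0, 255)  # saturation
            hsv[:, :, 2] = np.clip(hsv[:, :, 2] + 30, 0, 255)  # brightness
            return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
        if mood == "sad":
            # Grayscale processing: a muted look for a sad mood
            gray = cv2.cvtColor(face_region, cv2.COLOR_BGR2GRAY)
            return cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
        # Stylization processing as the default for other moods
        return cv2.stylization(face_region, sigma_s=60, sigma_r=0.45)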
In step 204, when there are at least two user expressions, the at least two user expressions are displayed.
Because the front camera acquires a facial image at predetermined intervals, at least two facial images of the user may be acquired while the user is sending or receiving messages, and the communication client may accordingly generate at least two user expressions. When the communication client has generated at least two user expressions, it can display them so that the user can choose which one to send.
As one possible implementation, the communication client can classify the at least two user expressions according to the current mood that each expression corresponds to, and select at least one user expression from each category to display for the user to choose from.
For example, as shown in Fig. 2B, the communication client 21 generates user expression 22, user expression 23, and user expression 24 from three acquired facial images, classifies user expression 22 and user expression 23 as "sad" and user expression 24 as "angry", and displays at least one user expression from each category for the user to choose from.
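For illustration only, the per-mood grouping used in step 204 can be expressed as a simple dictionary grouping; the (image, mood_label) pairing and the choice to display one candidate per category are assumptions made for this sketch.

    from collections import defaultdict

    def pick_candidates_by_mood(expressions):
        """expressions: iterable of (expression_image, mood_label) pairs."""
        groups = defaultdict(list)
        for image, mood in expressions:
            groups[mood].append(image)
        # Display at least one candidate per mood category for the user to pick from
        return {mood: images[:1] for mood, images in groups.items()}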
In step 205, a selection signal for one of the user expressions is received.
The communication client receives the user's selection signal for one of the displayed user expressions, thereby determining the user expression to be sent.
In step 206, according to the selection signal, the corresponding user expression is determined as the user expression to be sent.
In step 207, the user expression is set as the user's real-time avatar and sent to the peer; the peer replaces the original user avatar with the real-time avatar.
The communication client sets the user expression selected by the user as the user's real-time avatar and sends it to the peer. When the peer user receives the real-time avatar through the peer communication client, the original user avatar is replaced with the real-time avatar, so that the peer user can see more intuitively what expression the user has while sending or receiving messages.
For example, as shown in Fig. 2C, the communication client 21 displays the generated user expressions 22, 23, and 24, receives the user's selection signal for user expression 24, sets user expression 24 as the user's real-time avatar, and sends it to the communication client 25; the communication client 25 replaces the original user avatar with user expression 24 and displays it.
It should be noted that steps 204 to 206 are optional. That is, the communication client may instead set all of the generated user expressions (at least two) as the user's real-time avatar and send them to the peer together.
After the peer user receives the at least two real-time avatars through the peer communication client, the at least two real-time avatars are displayed in rotation, so that the peer user can more intuitively see how the other user's expression changed from reading the message to sending a reply.
In the above embodiment, the description takes a picture expression as an example of the user expression; this does not limit the present disclosure.
In summary, the expression sending method provided by this exemplary embodiment generates a user expression from the facial image acquired while the user is sending or receiving communication messages, and sends that user expression to the peer. This addresses the problem that the expression pictures in the expression library provided by a communication client cannot accurately convey the user's current emotional state: the communication messages exchanged by the user carry an expression that matches the user's current mood, so the user's current emotional state is accurately expressed.
The expression sending method provided by this exemplary embodiment also extracts the face region from the facial image and performs image processing on the face region to generate the user expression, so that the generated user expression is richer in form and expresses the user's current emotional state more accurately.
The expression sending method provided by this exemplary embodiment also sets the generated user expression as the user's real-time avatar and sends it to the peer, which replaces the original user avatar with the real-time avatar, so that the peer user can more intuitively see how the current user's expression changes while sending or receiving messages.
Fig. 3 is a flowchart of an expression sending method according to yet another exemplary embodiment. This embodiment is described by taking the case where the expression sending method is applied to a smartphone as an example. The expression sending method may include the following steps.
In step 301, when the application running in the foreground is a communication application, a facial image is acquired through the front camera at predetermined time intervals.
This step is implemented in the same way as step 201 above and is not repeated here.
In step 302, the face region in the facial image is extracted.
As in step 202 above, the communication client may use an iterative face detection algorithm to determine whether the acquired facial image contains a face and, when it does, locate the facial feature points in the image using a statistical model built from a training set and extract the face region based on the located feature points.
In step 303, the expression type of the face region is recognized.
After extracting the face region from the facial image, the communication client further recognizes the expression type of the face region.
Recognizing the expression type of the face region from the located facial feature points is known in the art and is not described further here.
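Purely as a runnable stand-in for the recognition in step 303, the sketch below labels a face region "happy" when OpenCV's bundled smile cascade fires and "neutral" otherwise; the cascade choice, thresholds, and two-label set are assumptions of this example, not the patent's recognition method.

    import cv2

    SMILE_CASCADE = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_smile.xml")

    def recognize_expression_type(face_region):
        """Crude stand-in: 'happy' if a smile is detected, else 'neutral'."""
        gray = cv2.cvtColor(face_region, cv2.COLOR_BGR2GRAY)
        smiles = SMILE_CASCADE.detectMultiScale(gray, scaleFactor=1.7, minNeighbors=20)
        return "happy" if len(smiles) > 0 else "neutral"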
In step 304, an expression matching the expression type is selected from a preset expression library as the user expression.
The communication client stores at least one expression in a preset expression library. Each stored expression may have been generated in advance from acquired facial images of the user or may have been stored by the user beforehand, and each expression corresponds to at least one expression type. The correspondence between the stored expressions and the expression types in the expression library may be as shown in Table 1.
Table 1
Expression type    Expression    Expression storage address
Happy, glad        Expression A  Address A
Sad, sorrowful     Expression B  Address B
Angry              Expression C  Address C
According to the expression type recognized for the face region, the communication client looks up the expression in the expression library that matches that expression type, retrieves the expression from its storage address, and uses it as the user expression.
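The lookup against the preset expression library of Table 1 could be sketched as a simple mapping; the in-memory dictionary and the file paths below are hypothetical stand-ins for the expressions and storage addresses listed in the table.

    # Illustrative in-memory version of Table 1; the "storage addresses" are
    # represented here by hypothetical file paths.
    EXPRESSION_LIBRARY = {
        "happy": "expressions/expression_a.png",
        "glad": "expressions/expression_a.png",
        "sad": "expressions/expression_b.png",
        "sorrowful": "expressions/expression_b.png",
        "angry": "expressions/expression_c.png",
    }

    def select_user_expression(expression_type):
        """Return the stored expression matching the recognized type, if any."""
        return EXPRESSION_LIBRARY.get(expression_type)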
In step 305, the user expression is added to the communication message to be sent, and the communication message is sent to the peer; the peer displays the communication message carrying the user expression.
The communication client automatically adds the user expression to the communication message to be sent and sends it to the peer together with the message. When there are multiple user expressions, the communication client may also generate a dynamic picture from the multiple user expressions and add the dynamic picture to the communication message; after receiving the dynamic picture, the peer can see intuitively, from the dynamic picture, how the other user's expression changed from reading the message to sending a reply.
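As an illustration of assembling a dynamic picture from several user expressions, the sketch below writes an animated GIF with Pillow; the GIF format, frame duration, and output file name are assumptions of this example, since the description only speaks of a "dynamic picture".

    from PIL import Image

    def build_dynamic_picture(expression_paths, output_path="user_expression.gif"):
        """Combine several user-expression images into one animated picture."""
        frames = [Image.open(path).convert("RGB") for path in expression_paths]
        frames[0].save(
            output_path,
            save_all=True,
            append_images=frames[1:],
            duration=500,  # milliseconds per frame; illustrative value
            loop=0,        # loop forever
        )
        return output_path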
In conclusion the expression sending method that the present exemplary embodiment provides, logical by being received and dispatched according to collected user The corresponding user's expression of Face image synthesis when information is interrogated, and user's expression is sent to opposite end;Solves communication client The problem of holding the expression picture in the expression library provided that can not accurately express user's current emotional state;User's receipts are reached The user's expression for meeting user's current emotional is carried in the communication information of hair, the accurate effect for expressing the current emotional state of user Fruit.
The expression sending method that the present exemplary embodiment provides, also by the expression type of identification human face region, and according to The expression type selects matched expression as user's expression from preset expression library, and is added to and the communication sent is needed to disappear In breath, the efficiency that user sends user's expression is improved.
It should be noted that in the above exemplary embodiments, step 207 can be exchanged with step 305, i.e. step 201 can become an individual embodiment to step 206 and step 305, and step 301 to step 304 and step 207 can be at For an individual embodiment, the disclosure is not limited thereto.
The following are device embodiments of the present disclosure, which can be used to perform the method embodiments of the present disclosure. For details not disclosed in the device embodiments, refer to the method embodiments of the present disclosure.
Fig. 4 is a structural block diagram of an expression sending device according to an exemplary embodiment. The expression sending device can be implemented, by software, hardware, or a combination of both, as all or part of an electronic device on which a communication client is installed. The expression sending device may include:
an acquisition module 402 configured to acquire a facial image of a user while the user is sending or receiving a communication message;
a generation module 404 configured to generate, from the facial image, a user expression corresponding to the user's current mood, the user expression being at least one of a picture expression and a text expression; and
a sending module 406 configured to send the user expression to the peer while the communication message is being exchanged.
In conclusion the expression sending device that the present exemplary embodiment provides, logical by being received and dispatched according to collected user The corresponding user's expression of Face image synthesis when information is interrogated, and user's expression is sent to opposite end;Solves communication client The problem of holding the expression picture in the expression library provided that can not accurately express user's current emotional state;User's receipts are reached The user's expression for meeting user's current emotional is carried in the communication information of hair, the accurate effect for expressing the current emotional state of user Fruit.
Fig. 5 is a structural block diagram of an expression sending device according to another exemplary embodiment. The expression sending device can be implemented, by software, hardware, or a combination of both, as all or part of an electronic device on which a communication client is installed. The expression sending device may include:
an acquisition module 502 configured to acquire a facial image of a user while the user is sending or receiving a communication message;
a generation module 504 configured to generate, from the facial image, a user expression corresponding to the user's current mood, the user expression being at least one of a picture expression and a text expression; and
a sending module 506 configured to send the user expression to the peer while the communication message is being exchanged.
Optionally, the generation module 504 includes:
a first extraction submodule 504A configured to extract the face region from the facial image; and
a processing submodule 504B configured to perform image processing on the face region to generate the user expression corresponding to the user's current mood, where the image processing includes at least one of filter processing, stylization processing, and grayscale processing.
Optionally, the generation module 504 includes:
a second extraction submodule 504C configured to extract the face region from the facial image;
a recognition submodule 504D configured to recognize the expression type of the face region; and
a selection submodule 504E configured to select, from a preset expression library, an expression that matches the expression type as the user expression.
Optionally, the sending module 506 includes:
a first sending submodule 506A configured to set the user expression as the user's real-time avatar and send it to the peer, the peer replacing the original user avatar with the real-time avatar;
or,
a second sending submodule 506B configured to add the user expression to the communication message to be sent and send the communication message to the peer, the peer displaying the communication message carrying the user expression.
Optionally, the device further includes:
a display module 507 configured to display at least two user expressions when there are at least two user expressions;
a receiving module 508 configured to receive a selection signal for one of the user expressions; and
a determination module 509 configured to determine, according to the selection signal, the corresponding user expression as the user expression to be sent.
Optionally, the acquisition module 502 is further configured to acquire a facial image through the front camera at predetermined time intervals when the application running in the foreground is a communication application.
In conclusion the expression sending device that the present exemplary embodiment provides, logical by being received and dispatched according to collected user The corresponding user's expression of Face image synthesis when information is interrogated, and user's expression is sent to opposite end;Solves communication client The problem of holding the expression picture in the expression library provided that can not accurately express user's current emotional state;User's receipts are reached The user's expression for meeting user's current emotional is carried in the communication information of hair, the accurate effect for expressing the current emotional state of user Fruit.
The expression sending device that the present exemplary embodiment provides, also by extracting the human face region in facial image, and it is right Human face region carries out image procossing, generates user's expression, so that the form of the user's expression generated is more abundant, more accurate table The emotional state current up to user.
The expression sending device that the present exemplary embodiment provides, it is also real-time by the way that user's expression of generation is determined as user Head portrait, and be sent to opposite end, has opposite end that original user's head portrait is replaced with the real-time head portrait of user, enable peer user more Add the expression shape change intuitively recognized when active user receives and dispatches communication information.
The expression sending device that the present exemplary embodiment provides, also by the expression type of identification human face region, and according to The expression type selects matched expression as user's expression from preset expression library, and is added to and the communication sent is needed to disappear In breath, the efficiency that user sends user's expression is improved.
Fig. 6 is a block diagram of an expression sending device 600 according to an exemplary embodiment. For example, the device 600 may be an electronic device on which a communication client is installed.
Referring to Fig. 6, the device 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
The processing component 602 generally controls the overall operation of the device 600, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 602 may include one or more processors 620 to execute instructions so as to perform all or part of the steps of the methods described above. In addition, the processing component 602 may include one or more modules that facilitate interaction between the processing component 602 and other components; for example, the processing component 602 may include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operation of the device 600. Examples of such data include instructions for any application or method operated on the device 600, contact data, phone book data, messages, pictures, videos, and the like. The memory 604 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The power component 606 supplies power to the various components of the device 600. The power component 606 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 600.
The multimedia component 608 includes a screen that provides an output interface between the device 600 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors that sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 608 includes a front camera and/or a rear camera. When the device 600 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each of the front camera and the rear camera may be a fixed optical lens system or have focusing and optical zoom capabilities.
The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 includes a microphone (MIC) configured to receive external audio signals when the device 600 is in an operating mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may be further stored in the memory 604 or sent via the communication component 616. In some embodiments, the audio component 610 also includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, or the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
The sensor component 614 includes one or more sensors for providing status assessments of various aspects of the device 600. For example, the sensor component 614 can detect the open/closed state of the device 600 and the relative positioning of components, such as the display and the keypad of the device 600; the sensor component 614 can also detect a change in position of the device 600 or of a component of the device 600, the presence or absence of user contact with the device 600, the orientation or acceleration/deceleration of the device 600, and a change in the temperature of the device 600. The sensor component 614 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate wired or wireless communication between the device 600 and other devices. The device 600 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 616 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 616 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 600 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the methods described above.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 604 including instructions, which can be executed by the processor 620 of the device 600 to perform the methods described above. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
A non-transitory computer-readable storage medium is provided such that, when the instructions in the storage medium are executed by the processor of the device 600, the device 600 is able to perform the expression sending method applied to an electronic device on which a communication client is installed.
Other embodiments of the present disclosure will readily occur to those skilled in the art after considering the specification and practicing the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or conventional techniques in the art not disclosed herein. The specification and examples are to be regarded as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes can be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (8)

1. An expression sending method, characterized in that the method comprises:
acquiring a facial image of a user while the user is sending or receiving a communication message, including: when the application running in the foreground is a communication application, acquiring a facial image through the front camera at predetermined time intervals;
generating, from the facial image, a user expression corresponding to the user's current mood, the user expression being at least one of a picture expression and a text expression; and
sending the user expression to a peer while the communication message is being exchanged, wherein, when at least two user expressions are generated, the at least two generated user expressions are determined as the user's real-time avatar and sent to the peer together, so that the peer, after receiving the at least two real-time avatars, displays the at least two real-time avatars in rotation; or, when at least two user expressions are generated, the user expression is added to the communication message to be sent and the communication message is sent to the peer, the peer displaying the communication message carrying the user expression.
2. The method according to claim 1, characterized in that generating, from the facial image, the user expression corresponding to the user's current mood comprises:
extracting the face region from the facial image; and
performing image processing on the face region to generate the user expression corresponding to the user's current mood, the image processing including at least one of filter processing, stylization processing, and grayscale processing.
3. The method according to claim 1, characterized in that generating, from the facial image, the user expression corresponding to the user's current mood comprises:
extracting the face region from the facial image;
recognizing the expression type of the face region; and
selecting, from a preset expression library, an expression matching the expression type as the user expression.
4. An expression sending device, characterized in that the device comprises:
an acquisition module configured to acquire a facial image of a user while the user is sending or receiving a communication message, including: when the application running in the foreground is a communication application, acquiring a facial image through the front camera at predetermined time intervals;
a generation module configured to generate, from the facial image, a user expression corresponding to the user's current mood, the user expression being at least one of a picture expression and a text expression; and
a sending module configured to send the user expression to a peer while the communication message is being exchanged;
wherein the sending module comprises:
a first sending submodule configured to, when at least two user expressions are generated, determine the at least two generated user expressions as the user's real-time avatar and send them to the peer together, so that the peer, after receiving the at least two real-time avatars, displays the at least two real-time avatars in rotation;
or, the sending module comprises:
a second sending submodule configured to, when at least two user expressions are generated, add the user expression to the communication message to be sent and send the communication message to the peer, the peer displaying the communication message carrying the user expression.
5. The device according to claim 4, characterized in that the generation module comprises:
a first extraction submodule configured to extract the face region from the facial image; and
a processing submodule configured to perform image processing on the face region to generate the user expression corresponding to the user's current mood, the image processing including at least one of filter processing, stylization processing, and grayscale processing.
6. The device according to claim 4, characterized in that the generation module comprises:
a second extraction submodule configured to extract the face region from the facial image;
a recognition submodule configured to recognize the expression type of the face region; and
a selection submodule configured to select, from a preset expression library, an expression matching the expression type as the user expression.
7. An expression sending device, characterized by comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
acquire a facial image of a user while the user is sending or receiving a communication message, including: when the application running in the foreground is a communication application, acquiring a facial image through the front camera at predetermined time intervals;
generate, from the facial image, a user expression corresponding to the user's current mood, the user expression being at least one of a picture expression and a text expression; and
send the user expression to a peer while the communication message is being exchanged, wherein, when at least two user expressions are generated, the at least two generated user expressions are determined as the user's real-time avatar and sent to the peer together, so that the peer, after receiving the at least two real-time avatars, displays the at least two real-time avatars in rotation; or, when at least two user expressions are generated, the user expression is added to the communication message to be sent and the communication message is sent to the peer, the peer displaying the communication message carrying the user expression.
8. A computer-readable storage medium having instructions stored thereon, characterized in that the instructions, when executed by a processor, implement the steps of the method according to any one of claims 1 to 3.
Application CN201510093000.5A (filed 2015-03-02, priority 2015-03-02) — Expression sending method and device — granted as CN104753766B, status Active


Publications (2)

Publication Number    Publication Date
CN104753766A (en)     2015-07-01
CN104753766B (en)     2019-03-22




Legal Events

Code  Description
PB01  Publication
SE01  Entry into force of request for substantive examination
GR01  Patent grant