CN104333688B - Device and method for generating emoticons based on captured images - Google Patents

Device and method for generating emoticons based on captured images

Info

Publication number
CN104333688B
Authority
CN
China
Prior art keywords
emoticon
user
expression
image
message
Prior art date
Legal status
Active
Application number
CN201310645748.2A
Other languages
Chinese (zh)
Other versions
CN104333688A (en)
Inventor
张柏卉
Current Assignee
Samsung Guangzhou Mobile R&D Center
Samsung Electronics Co Ltd
Original Assignee
Samsung Guangzhou Mobile R&D Center
Samsung Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Samsung Guangzhou Mobile R&D Center and Samsung Electronics Co Ltd
Priority to CN201310645748.2A
Publication of CN104333688A
Application granted
Publication of CN104333688B

Landscapes

  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)
  • Studio Devices (AREA)

Abstract

A device and method for generating emoticons based on captured images are provided. The device includes: an image acquisition unit for obtaining an image captured by a photographing apparatus; a preview unit for generating an expression effect image for preview based on the obtained image and displaying the expression effect image to the user; and an emoticon generation unit for generating an emoticon for output based on the expression effect image and adding the generated emoticon to the message input by the user.

Description

Device and method for generating emoticons based on captured images
Technical field
The present application relates to emoticon input technology, and more particularly to a device and method for generating and inputting emoticons in real time, based on captured images, while a user inputs a message.
Background art
In the prior art, when a user wants to insert an emoticon into a message being input (for example, a message input in the interface of a chat application or a social-media application), the user can only choose the desired emoticon from a predetermined emoticon library. Here, the emoticon library is a set of emoticons preset in the application or downloaded over the network, and the emoticons therein are static or animated graphics designed by the software publisher or by emoticon producers.
Although special software for making personalized emoticons already exists, such emoticons must be made in advance and then copied, in the form of pictures, into the message being input; essentially this is still a copy-and-paste of pictures. In addition, although some chat applications support image capture during a video call, this is limited to capturing an image to obtain a corresponding picture file; such pictures cannot be edited together with the text, and transmitting them consumes considerable data traffic and takes a relatively long time.
Summary of the invention
An object of exemplary embodiments of the present invention is to provide a device and method that can generate an emoticon based on a captured image while a user inputs a message, and input the emoticon into the message.
According to an aspect of the present invention, there is provided a device for generating emoticons based on captured images while a user inputs a message, including: an image acquisition unit for obtaining an image captured by a photographing apparatus; a preview unit for generating an expression effect image for preview based on the obtained image and displaying the expression effect image to the user; and an emoticon generation unit for generating an emoticon for output based on the expression effect image and adding the generated emoticon to the message input by the user.
The device may further include: a photographing apparatus for capturing images.
In the device, the photographing apparatus may be turned on to start capturing images according to an expression capture instruction of the user.
In the device, the expression capture instruction of the user may include at least one of the following: double-clicking, by touch or with a mouse, any position in the input box used for inputting the message; clicking, by touch or with a mouse, a capture menu item or capture button in the input box; performing a sliding gesture within the region of the input box; and inputting an expression capture command by voice.
In the device, the photographing apparatus may be turned on to start capturing images when the user enters the interface for inputting the message or when the user starts to input the message.
In the device, the photographing apparatus may include a front camera or a rear camera, and the image acquisition unit may obtain the image captured by the front camera or the rear camera; alternatively, the photographing apparatus may include both a front camera and a rear camera, and the image acquisition unit may obtain an image synthesized from the images captured respectively by the front camera and the rear camera.
In the device, the preview unit may generate the expression effect image for preview by embedding the obtained image in a predetermined expression frame, and display the expression effect image in a predetermined area of the screen.
In the device, the predetermined area may be located in the input box used for inputting the message, or in a preview window provided separately from the input box.
In the device, the predetermined expression frame may be a hollow circle.
In the device, the preview unit may generate the expression effect image for preview by embedding the obtained image in the predetermined expression frame, and display the expression effect image at the position of the cursor so as to replace the display of the cursor.
In the device, the preview unit may generate the expression effect image for preview by embedding the obtained image in the predetermined expression frame, and additionally display the expression effect image around the cursor.
In the device, when the cursor is located at the end of the message that has been input, the preview unit may display the expression effect image behind the cursor; when the cursor is not at the end of the message that has been input, the preview unit may directly display the expression effect image on the line below the cursor, or display it on the line below the cursor according to a preview instruction from the user.
In the device, the emoticon generation unit may obtain the expression effect image confirmed by the user, generate the emoticon for output based on the obtained expression effect image, and add the generated emoticon to the message input by the user.
In the device, the emoticon generation unit may obtain the expression effect image confirmed by the user through at least one of the following operations: clicking the expression effect image by touch or with a mouse; clicking a confirmation menu item or confirmation button provided on the screen; pressing a confirmation key or a side key; inputting a confirmation command by voice; and clicking any position on the screen by touch or with a mouse.
In the device, after the preview unit displays the expression effect image to the user, when the user clicks any position on the screen by touch or with a mouse, the emoticon generation unit may convert the expression effect image at that moment to generate the emoticon for output, and add the generated emoticon to the message input by the user.
In the device, after the generated emoticon is added to the message input by the user, or when the user inputs a cancel-emoticon instruction, the photographing apparatus may be turned off and the cursor may be displayed again.
The device may further include: an emoticon storage unit for storing at least one standard emoticon generated in advance based on captured images.
In the device, the emoticon generation unit may compare the expression effect image with the at least one standard emoticon; when the emoticon generation unit determines that the similarity between the expression effect image and one or more of the at least one standard emoticon exceeds a threshold, the emoticon generation unit adds the most similar standard emoticon to the message input by the user as the emoticon for output.
In the device, the emoticon generation unit may add the generated emoticon as a standard emoticon according to an expression-adding instruction of the user.
In the device, the obtained image may be a static or dynamic facial-expression image of the user.
The device may further include: a prompt unit for analyzing the message input by the user and prompting the user to make a corresponding expression according to the result of the analysis.
In the device, the photographing apparatus may capture the user's expression using an expression tracking method.
According to another aspect of the present invention, there is provided a method for generating emoticons based on captured images while a user inputs a message, including: obtaining a captured image; generating an expression effect image for preview based on the obtained image and displaying the expression effect image to the user; and generating an emoticon for output based on the expression effect image and adding the generated emoticon to the message input by the user.
The method may further include, before obtaining the captured image: capturing an image.
In the method, the step of capturing an image may include: turning on the photographing apparatus to start capturing images according to an expression capture instruction of the user.
In the method, the expression capture instruction of the user may include at least one of the following: double-clicking, by touch or with a mouse, any position in the input box used for inputting the message; clicking, by touch or with a mouse, a capture menu item or capture button in the input box; performing a sliding gesture within the region of the input box; and inputting an expression capture command by voice.
In the method, the step of capturing an image may include: turning on the photographing apparatus to start capturing images when the user enters the interface for inputting the message or when the user starts to input the message.
In the method, the step of capturing an image may include: capturing an image with a front camera or a rear camera included in the photographing apparatus, and the step of obtaining the captured image includes: obtaining the image captured by the front camera or the rear camera; alternatively, the step of capturing an image includes: capturing images with both a front camera and a rear camera included in the photographing apparatus, and the step of obtaining the captured image includes: obtaining an image synthesized from the images captured respectively by the front camera and the rear camera.
In the method, the expression effect image for preview may be generated by embedding the obtained image in a predetermined expression frame, and displayed in a predetermined area of the screen.
In the method, the predetermined area may be located in the input box used for inputting the message, or in a preview window provided separately from the input box.
In the method, the predetermined expression frame may be a hollow circle.
In the method, the expression effect image for preview may be generated by embedding the obtained image in the predetermined expression frame, and displayed at the position of the cursor so as to replace the display of the cursor.
In the method, the expression effect image for preview may be generated by embedding the obtained image in the predetermined expression frame, and additionally displayed around the cursor.
In the method, when the cursor is located at the end of the message that has been input, the expression effect image may be displayed behind the cursor; when the cursor is not at the end of the message that has been input, the expression effect image may be directly displayed on the line below the cursor, or displayed on the line below the cursor according to a preview instruction from the user.
In the method, the step of generating the emoticon for output may include: obtaining the expression effect image confirmed by the user, and generating the emoticon for output based on the obtained expression effect image.
In the method, in the step of obtaining the expression effect image confirmed by the user, the expression effect image confirmed by the user through at least one of the following operations may be obtained: clicking the expression effect image by touch or with a mouse; clicking a confirmation menu item or confirmation button provided on the screen; pressing a confirmation key or a side key; inputting a confirmation command by voice; and clicking any position on the screen by touch or with a mouse.
In the method, after the expression effect image is displayed to the user, when the user clicks any position on the screen by touch or with a mouse, the expression effect image at that moment may be converted to generate the emoticon for output, and the generated emoticon may be added to the message input by the user.
The method may further include: turning off the photographing apparatus and displaying the cursor again after the generated emoticon is added to the message input by the user, or when the user inputs a cancel-emoticon instruction.
The method may further include: storing at least one standard emoticon generated in advance based on captured images.
In the method, the step of generating the emoticon for output based on the expression effect image and adding the generated emoticon to the message input by the user may include: comparing the expression effect image with the at least one standard emoticon, and when it is determined that the similarity between the expression effect image and one or more of the at least one standard emoticon exceeds a threshold, adding the most similar standard emoticon to the message input by the user as the emoticon for output.
The method may further include: adding the generated emoticon as a standard emoticon according to an expression-adding instruction of the user.
In the method, the obtained image may be a static or dynamic facial-expression image of the user.
The method may further include: analyzing the message input by the user, and prompting the user to make a corresponding expression according to the result of the analysis.
In the method, the step of capturing an image may include: capturing the user's expression using an expression tracking method.
In the device and method according to exemplary embodiments of the present invention, an emoticon generated from a captured image can be inserted in real time while the user inputs a message. This not only enriches the available emoticons, but also allows the captured image, once converted into an emoticon, to be laid out together with the text of the message, saving both data traffic and time in transmission.
Description of the drawings
The above and other objects and features of exemplary embodiments of the present invention will become clearer from the following description taken in conjunction with the accompanying drawings, which exemplarily illustrate the embodiments, in which:
Fig. 1 is a block diagram of a device for generating emoticons based on captured images while a user inputs a message, according to an exemplary embodiment of the present invention;
Fig. 2 shows an example of a composite image obtained by the image acquisition unit according to an exemplary embodiment of the present invention;
Fig. 3 shows an example of embedding a captured image in a predetermined expression frame according to an exemplary embodiment of the present invention;
Fig. 4 is a flowchart of a method for generating emoticons based on captured images while a user inputs a message, according to an exemplary embodiment of the present invention;
Fig. 5 is a flowchart of a process of generating an emoticon by the emoticon generation device according to an exemplary embodiment of the present invention;
Fig. 6 is a flowchart of a process of generating an emoticon by the emoticon generation device according to another exemplary embodiment of the present invention;
Fig. 7 is a flowchart of a process of generating an emoticon by the emoticon generation device according to still another exemplary embodiment of the present invention;
Fig. 8 shows an example of an expression effect image for preview according to an exemplary embodiment of the present invention;
Fig. 9 shows an example of an expression effect image for preview according to another exemplary embodiment of the present invention;
Fig. 10 shows an example of an expression effect image for preview according to still another exemplary embodiment of the present invention;
Fig. 11 shows an example of an expression effect image for preview according to still another exemplary embodiment of the present invention;
Fig. 12 shows an example of standard emoticons according to an exemplary embodiment of the present invention.
Specific embodiments
Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings, in which the same reference numerals always refer to the same components.
Fig. 1 is a block diagram of a device for generating emoticons based on captured images while a user inputs a message, according to an exemplary embodiment of the present invention. As an example, the emoticon generation device shown in Fig. 1 can be used to generate emoticons based on captured images while the user inputs a message on various electronic products such as a personal computer, a smartphone, or a tablet computer.
As shown in Fig. 1, the emoticon generation device includes: an image acquisition unit 10 for obtaining an image captured by a photographing apparatus 5; a preview unit 20 for generating an expression effect image for preview based on the obtained image and displaying the expression effect image to the user; and an emoticon generation unit 30 for generating an emoticon for output based on the expression effect image and adding the generated emoticon to the message input by the user. Here, the photographing apparatus 5 may be included in the emoticon generation device, or may be connected to it as a peripheral device. The image acquisition unit 10, the preview unit 20 and the emoticon generation unit 30 may be implemented by general-purpose hardware such as a digital signal processor or a field-programmable gate array, by dedicated hardware such as a dedicated chip, or entirely in software, for example as modules of a chat or social-media application installed on the electronic product.
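As a reading aid only, a minimal sketch of this three-unit pipeline is given below; the class and method names are illustrative assumptions and do not appear in the patent, and the sketch is not the patent's implementation.

```python
# Illustrative sketch (not from the patent) of the three units described above.

class ImageAcquisitionUnit:
    def __init__(self, camera):
        self.camera = camera            # photographing apparatus 5

    def acquire(self):
        return self.camera.capture()    # still frame or short clip

class PreviewUnit:
    def __init__(self, frame_shape="hollow_circle"):
        self.frame_shape = frame_shape  # predetermined expression frame

    def make_effect_image(self, image):
        # embed the captured image in the predetermined expression frame
        return {"image": image, "frame": self.frame_shape}

class EmoticonGenerationUnit:
    def generate(self, effect_image, message, cursor_pos):
        # message is modeled here as a list of text runs and emoticons
        emoticon = {"type": "emoticon", "content": effect_image}
        return message[:cursor_pos] + [emoticon] + message[cursor_pos:]
```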
The emoticon generation device shown in Fig. 1 can convert a captured image into emoticon form while the user inputs a message and insert it into the message being input, so that it is laid out together with the text, saving transmission traffic and time.
Specifically, the image acquisition unit 10 obtains the image captured by the photographing apparatus 5. Here, preferably, the obtained image shows the user's expression and may be a static image or a dynamic image. After the photographing apparatus 5 is turned on and starts capturing images, the image acquisition unit 10 may automatically obtain the captured images in real time, or obtain them under the control of the user. For example, the photographing apparatus 5 may capture continuous moving images (that is, video); accordingly, the image acquisition unit 10 may obtain the captured video automatically or under the control of the user, may extract single static frames at a predetermined interval, or may synthesize a dynamic image, for example in GIF format, from several static frames extracted consecutively within a predetermined time. The photographing apparatus 5 itself may also be set to capture single static images at a predetermined interval, or to capture several static images consecutively within a predetermined time to synthesize a dynamic image.
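As one way to realize the dynamic-image option just described, the sketch below samples frames at a fixed interval and assembles them into a GIF with Pillow; the interval, frame count and duration are assumed values, not taken from the patent.

```python
# Sketch: sample captured frames and synthesize a looping GIF (illustrative values).
from PIL import Image

def frames_to_gif(frames, out_path="expression.gif",
                  every_nth=3, max_frames=8, duration_ms=120):
    sampled = frames[::every_nth][:max_frames]          # predetermined interval
    sampled = [f.convert("P", palette=Image.ADAPTIVE) for f in sampled]
    sampled[0].save(out_path, save_all=True, append_images=sampled[1:],
                    duration=duration_ms, loop=0)        # looping animation
    return out_path
```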
The photographing apparatus 5, as an optional component, may use expression tracking when capturing the user's facial-expression image, so as to capture the user's expression accurately. Here, as another optional component, the emoticon generation device shown in Fig. 1 may further include a prompt unit (not shown) for analyzing the message input by the user and prompting the user to make a corresponding expression according to the result of the analysis. Specifically, the prompt unit may analyze the meaning of the input message using semantic analysis techniques and, after the photographing apparatus 5 is turned on, prompt the user to make an expression matching the analysis result. For example, when the message input by the user is "This cracks me up", the prompt unit may determine that the mood of the message is happy and then prompt the user, for example by voice, to make a happy expression, e.g. by outputting "Say cheese". Alternatively, when the message input by the user is "I'm devastated", the prompt unit may determine that the mood of the message is sad and then prompt the user, for example by voice, to make a sad expression, e.g. by outputting "Please make a sad face". The prompt unit may also omit semantic analysis and simply prompt the user by voice, after the photographing apparatus 5 is turned on, that an expression will be captured.
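The patent does not specify the semantic-analysis algorithm; the following sketch uses a naive keyword lookup, with invented keyword and prompt tables, purely to illustrate the prompt unit's behavior.

```python
# Naive stand-in for the prompt unit; keyword table and prompts are assumptions.
MOOD_KEYWORDS = {
    "happy": ["cracks me up", "haha", "great news"],
    "sad":   ["devastated", "hard day", "miss you"],
}

VOICE_PROMPTS = {
    "happy": "Say cheese!",
    "sad":   "Please make a sad face.",
    None:    "An expression will be captured now.",
}

def prompt_for_expression(message: str) -> str:
    text = message.lower()
    for mood, keywords in MOOD_KEYWORDS.items():
        if any(k in text for k in keywords):
            return VOICE_PROMPTS[mood]
    return VOICE_PROMPTS[None]

print(prompt_for_expression("This cracks me up"))   # -> Say cheese!
```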
The photographing apparatus 5 may include a front camera or a rear camera; in this case, the image acquisition unit 10 obtains the image captured by the front camera or by the rear camera. The photographing apparatus 5 may also include both a front camera and a rear camera; in this case, the image acquisition unit 10 obtains an image synthesized from the images captured respectively by the front camera and the rear camera. Fig. 2 shows an example of such a composite image obtained by the image acquisition unit 10 according to an exemplary embodiment of the present invention: a portrait of a woman captured by the front camera and an image of a child captured by the rear camera are combined by the image acquisition unit 10 into a single image.
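The patent does not describe how the two camera images are combined; a simple side-by-side composition with Pillow is shown below as one plausible reading of Fig. 2, not as the patent's method.

```python
# Sketch: combine a front-camera frame and a rear-camera frame side by side.
from PIL import Image

def compose_front_and_rear(front_path: str, rear_path: str) -> Image.Image:
    front = Image.open(front_path)
    rear = Image.open(rear_path)
    height = min(front.height, rear.height)
    front = front.resize((front.width * height // front.height, height))
    rear = rear.resize((rear.width * height // rear.height, height))
    canvas = Image.new("RGB", (front.width + rear.width, height))
    canvas.paste(front, (0, 0))           # front-camera portrait on the left
    canvas.paste(rear, (front.width, 0))  # rear-camera scene on the right
    return canvas
```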
As an example, the photographing apparatus 5 may be turned on to start capturing images according to an expression capture instruction of the user. Specifically, while the user inputs a message, when the user wants to insert an emoticon based on a captured image, the user may input an expression capture instruction, for example through at least one of the following operations: double-clicking, by touch or with a mouse, any position in the input box used for inputting the message; clicking, by touch or with a mouse, a capture menu item or capture button in the input box; performing a sliding gesture within the region of the input box (for example, with a finger or stylus); or inputting an expression capture command by voice. In this case, the image acquisition unit 10 may automatically obtain the image captured by the photographing apparatus 5 in real time. After the emoticon based on the captured image is generated and added to the message being input, or when the user inputs a cancel-emoticon instruction, the photographing apparatus 5 may be turned off.
As another example, the photographing apparatus 5 may be turned on to start capturing images when the user enters the interface for inputting the message or when the user starts to input the message. In this case, the photographing apparatus 5 may stay on while the user inputs the message (for example, while a chat application is being used). The image acquisition unit 10 may then automatically obtain the image captured by the photographing apparatus 5 in real time, or obtain it under the control of the user.
In addition, the preview unit 20 generates the expression effect image for preview based on the obtained image and displays it to the user. For example, the preview unit 20 may generate the expression effect image by embedding the obtained image in a predetermined expression frame, and display it in a predetermined area of the screen. As an example, the predetermined expression frame may be a hollow circle; it should be understood, however, that the hollow circle does not limit the scope of the present invention. Fig. 3 shows an example of embedding the captured image in a predetermined expression frame according to an exemplary embodiment of the present invention; as shown in Fig. 3, expression frames of various shapes can be used to generate the expression effect image. Here, the specification of the expression frame (such as its size, length and width) may serve as the constraint under which the obtained image is converted into an emoticon. In addition, the predetermined area may be located in the input box used for inputting the message, for example at or around the cursor position, or in a preview window provided separately from the input box; preferably, the size and position of the preview window can be set and adjusted by the user.
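As one plausible reading of the hollow-circle expression frame, the following Pillow sketch center-crops a captured frame and masks it into a circle; the diameter and outline styling are assumptions, not values from the patent.

```python
# Sketch: embed a captured frame in a hollow-circle expression frame.
from PIL import Image, ImageDraw, ImageOps

def embed_in_circle(face: Image.Image, diameter: int = 96) -> Image.Image:
    face = ImageOps.fit(face, (diameter, diameter))   # center-crop to a square
    mask = Image.new("L", (diameter, diameter), 0)
    ImageDraw.Draw(mask).ellipse((0, 0, diameter, diameter), fill=255)
    framed = Image.new("RGBA", (diameter, diameter), (0, 0, 0, 0))
    framed.paste(face, (0, 0), mask)                  # keep only the circular region
    ImageDraw.Draw(framed).ellipse((0, 0, diameter - 1, diameter - 1),
                                   outline=(80, 80, 80, 255), width=2)
    return framed
```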
In addition, the emoticon generation unit 30 generates the emoticon for output based on the expression effect image and adds the generated emoticon to the message input by the user.
As an example, the emoticon generation unit 30 may obtain the expression effect image confirmed by the user, generate the emoticon for output based on the obtained expression effect image, and add the generated emoticon to the message input by the user (for example, at the current cursor position). Here, the emoticon generation unit 30 converts the confirmed expression effect image into emoticon form (conforming to a predetermined length and width), so that the converted emoticon can be inserted into the message input by the user and laid out together with the text, requiring less data traffic and time than transmitting the picture itself.
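The patent only requires that the confirmed effect image be converted to a predetermined length and width before insertion; the sketch below assumes a 64×64 PNG payload purely for illustration.

```python
# Sketch: convert a confirmed effect image into a compact "emoticon form" payload.
import io
from PIL import Image

EMOTICON_SIZE = (64, 64)   # assumed predetermined width and height

def to_emoticon(effect_image: Image.Image) -> bytes:
    small = effect_image.resize(EMOTICON_SIZE)
    buffer = io.BytesIO()
    small.save(buffer, format="PNG")   # small payload laid out inline with text
    return buffer.getvalue()
```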
For example, the emoticon generation unit 30 may obtain the expression effect image confirmed by the user through at least one of the following operations: clicking the expression effect image by touch or with a mouse; clicking a confirmation menu item or confirmation button provided on the screen; pressing a confirmation key or a side key; inputting a confirmation command by voice; or clicking any position on the screen by touch or with a mouse. By choosing an appropriate confirmation operation for each situation, the user can conveniently confirm the preview of the emoticon to be generated while the expression is being captured.
As another example, when the photographing apparatus 5 is turned on to start capturing images according to an expression capture instruction of the user, the emoticon generation unit 30 may, instead of receiving a confirmed expression effect image from the user, directly select a standard emoticon as the output emoticon based on the similarity between the expression effect image and pre-stored standard emoticons.
Specifically, in this case the emoticon generation device shown in Fig. 1 may further include an emoticon storage unit (not shown) for storing at least one standard emoticon generated in advance based on captured images. As an example, the standard emoticons may be typical expressions representing different moods, the user's favorite expressions, and so on, and they can be updated continuously, that is, emoticons generated later can be added as standard emoticons to the emoticon storage unit, or used to update the original standard emoticons in the emoticon storage unit. For example, standard emoticons conforming to the emoticon format may be generated by processing captured images (extracting the facial-expression part of the image and applying processing such as scaling to the extracted part); these standard emoticons may be stored in a dedicated standard-emoticon library, or alternatively in the default emoticon library. Fig. 12 shows an example of adding generated standard emoticons to an existing default emoticon library; in Fig. 12, the last two emoticons are standard emoticons newly added in a user-defined way.
Accordingly, the emoticon generation unit 30 may compare the previewed expression effect image with the at least one standard emoticon in the emoticon storage unit. When the emoticon generation unit 30 determines that the similarity between the expression effect image and one or more of the at least one standard emoticon exceeds a threshold (for example, a similarity greater than 80%), the emoticon generation unit 30 may add the most similar standard emoticon to the message input by the user as the emoticon for output. Here, the emoticon generation unit 30 may compare the expression effect image with a standard emoticon based on texture features, color features, luminance features and the like, and obtain a numerical similarity reflecting the degree of resemblance.
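The patent names texture, color and luminance features but no specific metric; the sketch below uses a normalized color-histogram overlap as a stand-in similarity measure, with the 80% threshold quoted above only as an example.

```python
# Sketch: pick the most similar stored standard emoticon above a threshold.
from PIL import Image

def histogram_similarity(a: Image.Image, b: Image.Image) -> float:
    ha = a.convert("RGB").resize((64, 64)).histogram()
    hb = b.convert("RGB").resize((64, 64)).histogram()
    overlap = sum(min(x, y) for x, y in zip(ha, hb))
    return overlap / sum(ha)             # 1.0 means identical histograms

def pick_standard_emoticon(effect_image, standard_emoticons, threshold=0.8):
    best, best_score = None, 0.0
    for emoticon in standard_emoticons:
        score = histogram_similarity(effect_image, emoticon)
        if score > threshold and score > best_score:
            best, best_score = emoticon, score
    return best                           # None -> fall back to user confirmation
```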
In this way, a corresponding emoticon can be generated without requiring a confirmation operation from the user; the generated emoticon still reflects the specific expression that was captured, while generation is faster and the user's operations are simplified.
Hereinafter, a method for generating emoticons based on captured images while a user inputs a message according to an exemplary embodiment of the present invention will be described with reference to Figs. 4 to 11. The method may be performed by the emoticon generation device shown in Fig. 1, or implemented by a computer program; for example, the method may be performed by an application for inputting messages installed on an electronic product.
Fig. 4 is a flowchart of a method for generating emoticons based on captured images while a user inputs a message, according to an exemplary embodiment of the present invention. As an example, according to the method shown in Fig. 4, emoticons can be generated based on captured images while the user inputs a message on various electronic products such as a personal computer, a smartphone or a tablet computer.
Referring to Fig. 4, in step S10, a captured image is obtained. Here, preferably, the obtained image shows the user's expression and may be a static image or a dynamic image. Specifically, after the photographing apparatus is turned on and starts capturing images, the captured images may be obtained automatically in real time, or obtained under the control of the user. For example, the photographing apparatus may capture continuous moving images (that is, video); accordingly, in step S10 the captured video may be obtained automatically or under the control of the user, single static frames may be extracted at a predetermined interval, or a dynamic image, for example in GIF format, may be synthesized from several static frames extracted consecutively within a predetermined time. The photographing apparatus itself may also be set to capture single static images at a predetermined interval, or to capture several static images consecutively within a predetermined time to synthesize a dynamic image.
As an additional step, before step S10, the method shown in Fig. 4 may further include: capturing an image. Here, a photographing apparatus provided as a built-in component or as a peripheral device may be used to capture the image (for example, a static or dynamic facial-expression image of the user). As an example, expression tracking may be used when capturing the user's facial-expression image, so as to capture the user's expression accurately.
Here, as another additional step, the emoticon generation method shown in Fig. 4 may further include: analyzing the message input by the user, and prompting the user to make a corresponding expression according to the result of the analysis. Specifically, the meaning of the input message may be analyzed using semantic analysis techniques, and after the photographing apparatus is turned on the user may be prompted to make an expression matching the analysis result. For example, when the message input by the user is "This cracks me up", the mood of the message may be determined to be happy, and the user may then be prompted, for example by voice, to make a happy expression, e.g. by outputting "Say cheese". Alternatively, when the message input by the user is "I'm devastated", the mood of the message may be determined to be sad, and the user may then be prompted, for example by voice, to make a sad expression, e.g. by outputting "Please make a sad face". The prompting step may also omit semantic analysis and simply prompt the user by voice, after the photographing apparatus is turned on, that an expression will be captured.
According to an exemplary embodiment of the present invention, the photographing apparatus may include a front camera or a rear camera; in this case, in step S10, the image captured by the front camera or by the rear camera is obtained. The photographing apparatus may also include both a front camera and a rear camera; in this case, in step S10, an image synthesized from the images captured respectively by the front camera and the rear camera is obtained.
As an example, in step S10, the photographing apparatus may be turned on to start capturing images according to an expression capture instruction of the user. Specifically, while the user inputs a message, when the user wants to insert an emoticon based on a captured image, the user may input an expression capture instruction, for example through at least one of the following operations: double-clicking, by touch or with a mouse, any position in the input box used for inputting the message; clicking, by touch or with a mouse, a capture menu item or capture button in the input box; performing a sliding gesture within the region of the input box; or inputting an expression capture command by voice. In this case, the image captured by the photographing apparatus may be obtained automatically in real time. Accordingly, after the emoticon based on the captured image is generated and added to the message being input, or when the user inputs a cancel-emoticon instruction, the photographing apparatus may be turned off.
As another example, in step S10, the photographing apparatus may be turned on to start capturing images when the user enters the interface for inputting the message or when the user starts to input the message. In this case, the photographing apparatus may stay on while the user inputs the message (for example, while a chat application is being used). The image captured by the photographing apparatus may then be obtained automatically in real time, or obtained under the control of the user.
Next, in step S20, an expression effect image for preview is generated based on the obtained image and displayed to the user. For example, the expression effect image may be generated by embedding the obtained image in a predetermined expression frame and displayed in a predetermined area of the screen. As an example, the predetermined expression frame may be a hollow circle; it should be understood, however, that the hollow circle does not limit the scope of the present invention, and expression frames of various shapes can be used to generate the expression effect image. Here, the specification of the expression frame (such as its size, length and width) may serve as the constraint under which the obtained image is converted into an emoticon. In addition, the predetermined area may be located in the input box used for inputting the message, for example at or around the cursor position, or in a preview window provided separately from the input box; preferably, the size and position of the preview window can be set and adjusted by the user.
Then, in step S30, an emoticon for output is generated based on the expression effect image, and the generated emoticon is added to the message input by the user.
As an example, in step S30, the expression effect image confirmed by the user may be obtained, the emoticon for output may be generated based on the obtained expression effect image, and the generated emoticon may be added to the message input by the user (for example, at the current cursor position). Here, the expression effect image confirmed by the user is converted into emoticon form (conforming to a predetermined length and width), so that the converted emoticon can be inserted into the message input by the user and laid out together with the text, requiring less data traffic and time than transmitting the picture itself.
For example, the expression effect image confirmed by the user through at least one of the following operations may be obtained: clicking the expression effect image by touch or with a mouse; clicking a confirmation menu item or confirmation button provided on the screen; pressing a confirmation key or a side key; inputting a confirmation command by voice; or clicking any position on the screen by touch or with a mouse. By choosing an appropriate confirmation operation for each situation, the user can conveniently confirm the preview of the emoticon to be generated while the expression is being captured.
As another example, when the photographing apparatus is turned on to start capturing images according to an expression capture instruction of the user, it may be unnecessary to receive a confirmed expression effect image from the user; instead, a standard emoticon may be selected directly as the output emoticon based on the similarity between the expression effect image and pre-stored standard emoticons.
Specifically, in this case the emoticon generation method shown in Fig. 4 may further include the step of storing at least one standard emoticon generated in advance based on captured images. As an example, the standard emoticons may be typical expressions representing different moods, the user's favorite expressions, and so on, and they can be updated continuously, that is, emoticons generated later can be stored as standard emoticons or used to update the original standard emoticons.
Accordingly, in step S30, the previewed expression effect image may be compared with the at least one stored standard emoticon; when it is determined that the similarity between the expression effect image and one or more of the at least one standard emoticon exceeds a threshold (for example, a similarity greater than 80%), the most similar standard emoticon may be added to the message input by the user as the emoticon for output. Here, the expression effect image and a standard emoticon may be compared based on texture features, color features, luminance features and the like, and a numerical similarity reflecting the degree of resemblance may be obtained.
In this way, a corresponding emoticon can be generated without requiring a confirmation operation from the user; the generated emoticon still reflects the specific expression that was captured, while generation is faster and the user's operations are simplified.
With the emoticon generation method shown in Fig. 4, a captured image can be converted into emoticon form while the user inputs a message and inserted into the message being input, so that it is laid out together with the text, saving transmission traffic and time.
Fig. 5 is a flowchart of a process of generating an emoticon by the emoticon generation device according to an exemplary embodiment of the present invention.
Referring to Fig. 5, in step S101, while the user inputs a message, an expression capture instruction directed at the photographing apparatus 5 is received from the user. As an example, the expression capture instruction of the user may include at least one of the following: double-clicking, by touch or with a mouse, any position in the input box used for inputting the message; clicking, by touch or with a mouse, a capture menu item or capture button in the input box; performing a sliding gesture within the region of the input box; and inputting an expression capture command by voice.
After the expression capture instruction of the user is received, in step S102, the photographing apparatus 5 is turned on to start capturing images. As an example, the photographing apparatus 5 may capture the user's facial-expression image.
Then, in step S103, the image captured by the photographing apparatus 5 is obtained by the image acquisition unit 10; here, the obtained image may be a video of the user's expression, a single static image, or a dynamic image synthesized from multiple frames.
In step S104, an expression effect image for preview is generated by the preview unit 20 based on the obtained image; for example, the preview unit 20 may generate the expression effect image by embedding the obtained image in a predetermined expression frame (for example, a hollow circle). In step S105, the expression effect image generated by the preview unit 20 replaces the display of the cursor in the input message. As an example, the hollow circle here may be transformed from the current cursor (that is, the cursor at the moment the user inputs the expression capture instruction): the current cursor becomes a hollow circle, and the obtained image is embedded in the hollow circle by the preview unit 20, so that the display of the expression effect image replaces the display of the cursor. Fig. 8 shows examples of the expression effect image generated in two cases: with the cursor at the end of the message, and with the cursor in the middle of the message.
Next, in step S106, the emoticon for output is generated by the emoticon generation unit 30 based on the expression effect image, and the generated emoticon is added to the message input by the user (that is, at the position marked by the cursor in the input message when the user input the expression capture instruction). Here, as an example, the emoticon generation unit 30 may obtain the expression effect image confirmed by the user and generate the emoticon for output based on the obtained expression effect image. Specifically, after the preview unit 20 displays the expression effect image to the user, when the user clicks any position on the screen by touch or with a mouse, the emoticon generation unit 30 converts the expression effect image at that moment into the emoticon for output and adds the generated emoticon to the message input by the user, that is, at the position where the cursor was before it was transformed into the hollow circle. As another example, the emoticon generation unit 30 may compare the expression effect image with at least one stored standard emoticon; when the emoticon generation unit 30 determines that the similarity between the expression effect image and one or more of the at least one standard emoticon exceeds a threshold, the emoticon generation unit 30 adds the most similar standard emoticon to the message input by the user as the emoticon for output.
After the emoticon is generated and added to the message, in step S107, the photographing apparatus 5 is turned off; then, in step S108, the cursor is displayed again at the position it occupied when the user input the expression capture instruction.
In the process shown in Fig. 5, the photographing apparatus 5 is turned on only when the user wishes to input an emoticon, and is turned off after the emoticon is generated and added to the message. In addition, in the process shown in Fig. 5, if the user inputs a cancel-emoticon instruction after the photographing apparatus 5 is turned on, the photographing apparatus 5 is turned off immediately, the emoticon generation process ends accordingly, and the cursor is displayed again.
Fig. 6 is a flowchart of a process of generating an emoticon by the emoticon generation device according to another exemplary embodiment of the present invention.
Referring to Fig. 6, in step S111, the user enters the interface for inputting a message, or the user starts to input a message.
Next, in step S112, the photographing apparatus 5 is turned on to start capturing images. As an example, the photographing apparatus 5 may capture the user's facial-expression image.
Then, in step S113, the image captured by the photographing apparatus 5 is obtained by the image acquisition unit 10; here, the obtained image may be a video of the user's expression, a single static image, or a dynamic image synthesized from multiple frames.
In step S114, an expression effect image for preview is generated by the preview unit 20 based on the obtained image; for example, the preview unit 20 may generate the expression effect image by embedding the obtained image in a predetermined expression frame (for example, a hollow circle).
In step S115, the preview unit 20 determines whether the cursor is at the end of the input message. If it is determined in step S115 that the cursor is at the end of the input message, then in step S116 the generated expression effect image is displayed by the preview unit 20 behind the cursor. If it is determined in step S115 that the cursor is not at the end of the input message, then in step S117 the generated expression effect image is displayed by the preview unit 20 on the line below the cursor. Fig. 9 shows examples of the expression effect image generated in the two cases: with the cursor at the end of the message, and with the cursor in the middle of the message.
Next, in step S118, the emoticon for output is generated by the emoticon generation unit 30 based on the expression effect image, and the generated emoticon is added to the message input by the user (that is, at the current position of the cursor in the input message). Here, as an example, the emoticon generation unit 30 may obtain the expression effect image confirmed by the user and generate the emoticon for output based on the obtained expression effect image. Specifically, after the preview unit 20 displays the expression effect image to the user, when the user clicks any position on the screen by touch or with a mouse, the emoticon generation unit 30 converts the expression effect image at that moment into the emoticon for output and adds the generated emoticon to the message input by the user, that is, at the current position of the cursor.
In the process shown in Fig. 6, as soon as the user enters the message input interface or starts to input a message, the photographing apparatus 5 is turned on and keeps capturing. When the cursor is not at the end of the input message, the expression preview is displayed on the line below the cursor.
In practice, however, when the user moves the cursor from the end to a position in the middle of the message, it is often not in order to insert an emoticon but in order to modify the text. In such a case, the expression preview may not be displayed.
To this end, Fig. 7 shows a flowchart of a process of generating an emoticon by the emoticon generation device according to still another exemplary embodiment of the present invention.
Referring to Fig. 7, in step S121, the user enters the interface for inputting a message, or the user starts to input a message.
Next, in step S122, the photographing apparatus 5 is turned on to start capturing images. As an example, the photographing apparatus 5 may capture the user's facial-expression image.
Then, in step S123, the image captured by the photographing apparatus 5 is obtained by the image acquisition unit 10; here, the obtained image may be a video of the user's expression, a single static image, or a dynamic image synthesized from multiple frames.
In step S124, an expression effect image for preview is generated by the preview unit 20 based on the obtained image; for example, the preview unit 20 may generate the expression effect image by embedding the obtained image in a predetermined expression frame (for example, a hollow circle).
In step S125, the preview unit 20 determines whether the cursor is at the end of the input message. If it is determined in step S125 that the cursor is at the end of the input message, then in step S126 the generated expression effect image is displayed by the preview unit 20 behind the cursor. If it is determined in step S125 that the cursor is not at the end of the input message, then in step S127 the preview unit 20 determines whether the user has input a preview instruction. When the preview unit 20 determines in step S127 that the user has input a preview instruction, in step S128 the generated expression effect image is displayed by the preview unit 20 on the line below the cursor. Fig. 10 shows examples: the expression effect image generated when the cursor is at the end of the message, and the case in which no expression effect image is displayed because the cursor is in the middle of the message and the user has not input a preview instruction.
Next, in step S129, the emoticon for output is generated by the emoticon generation unit 30 based on the expression effect image, and the generated emoticon is added to the message input by the user (that is, at the current position of the cursor in the input message). Here, as an example, the emoticon generation unit 30 may obtain the expression effect image confirmed by the user and generate the emoticon for output based on the obtained expression effect image. Specifically, after the preview unit 20 displays the expression effect image to the user, when the user clicks any position on the screen by touch or with a mouse, the emoticon generation unit 30 converts the expression effect image at that moment into the emoticon for output and adds the generated emoticon to the message input by the user, that is, at the current position of the cursor.
In the process shown in Fig. 7, as soon as the user enters the message input interface or starts to input a message, the photographing apparatus 5 is turned on and keeps capturing. When the cursor is not at the end of the input message, a preview instruction from the user is required before the expression preview is displayed on the line below the cursor.
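The placement logic shared by the processes of Figs. 6 and 7 can be summarized as follows; the return labels are illustrative and not part of the patent.

```python
# Sketch of the preview-placement decision described for Figs. 6 and 7.
def preview_placement(cursor_at_end: bool,
                      require_preview_instruction: bool,
                      preview_instruction_given: bool = False) -> str:
    if cursor_at_end:
        return "behind_cursor"          # steps S116 / S126
    if not require_preview_instruction:
        return "line_below_cursor"      # Fig. 6, step S117
    if preview_instruction_given:
        return "line_below_cursor"      # Fig. 7, step S128
    return "not_shown"                  # user is likely editing text

print(preview_placement(cursor_at_end=False, require_preview_instruction=True))
# -> not_shown
```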
The above examples are intended only to explain the emoticon generation device and method according to exemplary embodiments of the present invention, and should not be construed as limiting the present invention. As described above, those skilled in the art may implement the present invention with a variety of different instruction input manners, image confirmation manners and the like. For example, in addition to displaying the expression effect image as shown in Figs. 8 to 10, the expression effect image may also be displayed, regardless of the cursor position, in a preview window provided separately from the message input box. As shown in Fig. 11, the expression effect image is displayed in a preview window whose size and position can be set or adjusted by the user. In addition, after the user confirms an expression effect image by clicking any position in the preview window, the emoticon generation unit 30 may convert the expression effect image according to parameters conforming to the emoticon format (such as size and shape), so as to generate the emoticon for output, and finally add the generated emoticon to the message at the cursor position.
As can be seen from the above description with reference to Figs. 1 to 12, in the emoticon generation device and method according to exemplary embodiments of the present invention, an image (for example, a facial-expression image of the user) can be captured in real time as needed and converted into an emoticon, which is then laid out and transmitted together with the text as part of the user's input. In this way, the user can not only add personalized expressions in real time, but also save the time and data traffic of transmitting emoticons, improving the user's chat experience. In addition, distinctive capture, preview, confirmation (for example, confirming the previewed capture by clicking any position) and emoticon generation processes are provided, which can further enrich the user's experience and simplify operation.
It may be noted that, according to implementation needs, each step described in this application may be split into more steps, and partial operations of two or more steps may be combined into a new step, in order to achieve the object of the present invention.
The above-described methods according to the present invention may be realized in hardware or firmware, implemented as software or computer code that can be stored in a recording medium (such as a CD-ROM, RAM, floppy disk, hard disk or magneto-optical disk), or implemented as computer code that is downloaded over a network, originally stored in a remote recording medium or a non-transitory machine-readable medium, and then stored in a local recording medium, so that the methods described herein can be processed by such software stored on a recording medium using a general-purpose computer, a dedicated processor, or programmable or dedicated hardware (such as an ASIC or FPGA). It will be understood that a computer, processor, microprocessor controller or programmable hardware includes a storage component (for example, RAM, ROM, flash memory and the like) that can store or receive software or computer code; when the software or computer code is accessed and executed by the computer, processor or hardware, the processing methods described herein are realized. Furthermore, when a general-purpose computer accesses code for implementing the processing shown herein, the execution of the code converts the general-purpose computer into a special-purpose computer for performing the processing shown herein.
Although the present invention has been shown and described with reference to preferred embodiments, it will be understood by those skilled in the art that various modifications and changes may be made to these embodiments without departing from the spirit and scope of the present invention as defined by the claims.

Claims (22)

1. A device for generating emoticons based on captured images while a user inputs a message, comprising:
a prompt unit for analyzing the message input by the user and prompting the user to make a corresponding expression according to the result of the analysis;
an image acquisition unit for obtaining an image captured by a photographing apparatus;
a preview unit for generating a plurality of expression effect images for preview based on the obtained image and displaying the plurality of expression effect images to the user; and
an emoticon generation unit for generating an emoticon for output based on an expression effect image selected by the user from the plurality of expression effect images, and adding the generated emoticon to the message input by the user.
2. equipment as described in claim 1, further includes:
Filming apparatus, for shooting image.
3. equipment as claimed in claim 2, wherein, the filming apparatus shoots instruction according to the expression of user and opens to open Begin shooting image.
4. equipment as claimed in claim 3, wherein, the expression shooting instruction of the user includes at least one in following item It is a:To double-click any position in the input frame for inputting message, with touch manner or lead to touch manner or by mouse Mouse is crossed to click shooting menu item or shooting push button in the input frame, carry out slip behaviour in the region of the input frame Make, with voice mode input expression shooting order.
5. equipment as claimed in claim 2, wherein, the filming apparatus when user enters the interface for inputting message or It opens to start to shoot image when user starts to input message.
6. equipment as claimed in claim 2, wherein, the filming apparatus includes front camera or rear camera, also, Image acquisition unit obtains the image shot by front camera or rear camera;Alternatively, the filming apparatus is including preposition Both camera and rear camera, also, image acquisition unit acquisition is respectively shot by front camera and rear camera Image synthesis after image.
7. equipment as described in claim 1, wherein, preview unit is generated by the way that the image of acquisition is embedded in predetermined expression frame For the expression effect figure of preview, and the presumptive area that expression effect figure is displayed on the screen.
8. equipment as claimed in claim 7, wherein, the presumptive area is located in the input frame for inputting message or position In the preview window being separately provided independently of the input frame.
9. equipment as claimed in claim 7, wherein, the predetermined expression frame is open circles.
10. equipment as claimed in claim 3, wherein, preview unit by the image obtained by being embedded in predetermined expression frame next life Into the expression effect figure for preview, expression effect figure is included in the position where cursor to replace the display of cursor.
11. equipment as claimed in claim 5, wherein, preview unit by the image obtained by being embedded in predetermined expression frame next life Into the expression effect figure for preview, expression effect figure is additionally included around cursor.
12. equipment as claimed in claim 11, wherein, when cursor is located at the end for having inputted message, preview unit is by table Feelings design sketch is shown in behind cursor, and when cursor is not at having inputted the end of message, preview unit is directly by expression Design sketch is shown in the next line of cursor or expression effect figure is included the next line in cursor according to the preview of user instruction.
13. equipment as described in claim 1, wherein, emoticon generation unit obtains the expression effect figure that user confirms, is based on The emoticon of generation is added to message input by user by the expression effect figure of acquisition to generate the emoticon for output In.
14. equipment as claimed in claim 13, wherein, emoticon generation unit obtain user by following operation at least The expression effect figure of one confirmation:Expression design sketch is clicked with touch manner or by mouse;Click is arranged in screen Confirm menu item or ACK button;Press acknowledgement key or side switch;With voice mode input validation order;With touch manner or pass through Mouse clicks any position in screen.
15. equipment as claimed in claim 10, wherein, in preview unit after user shows expression effect figure, work as user With touch manner or by mouse come when clicking any position in screen, emoticon generation unit is by expression effect figure at this time It is converted to generate the emoticon for output, and the emoticon of generation is added in message input by user.
16. equipment as claimed in claim 15, wherein, it is added to it in message input by user in the emoticon of generation Afterwards, alternatively, when emoticon instruction is abandoned in user's input, filming apparatus is closed, and cursor is re-displayed.
17. equipment as claimed in claim 3, further includes:Emoticon storage unit, for storing the image for being in advance based on shooting And at least one standard emoticon generated.
18. equipment as claimed in claim 17, wherein, emoticon generation unit is by expression effect figure and at least one mark Quasi- emoticon is compared, when emoticon generation unit determines expression effect figure and one at least one standard emoticon When similarity between a or multiple standard emoticon is more than threshold value, emoticon generation unit is by a most similar standard scale Feelings symbol is added to as the emoticon for output in message input by user.
19. equipment as claimed in claim 18, wherein, emoticon generation unit will generation according to the expression of user addition instruction Emoticon be added to standard emoticon.
20. equipment as claimed in claim 2, wherein, the image of acquisition refers to the either statically or dynamically facial expression image of user.
21. equipment as claimed in claim 20, wherein, filming apparatus shoots the expression of user according to expression method for tracing.
22. a kind of method for being used for the image formation sheet feelings symbol based on shooting when user inputs message, including:
It analyzes message input by user, and user is prompted to make corresponding expression according to the result of analysis;
Obtain the image of shooting;
Multiple expression effect figures for preview are generated, and show the multiple expression effect to user based on the image of acquisition Figure;
The expression effect figure selected in the multiple expression effect figure based on user generates the emoticon for output, and will The emoticon of generation is added in message input by user.
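For illustration only (it is not part of the claims), the following sketch corresponds to the matching step of claim 18: the newly generated expression effect figure is compared against the stored standard emoticons and, if one of them is similar enough, that stored emoticon is reused as the emoticon for output. A fixed-size grayscale bitmap representation and a mean-pixel-difference similarity score are assumed here; the claim does not prescribe a particular representation or metric.

```kotlin
import kotlin.math.abs

// Stored "standard" emoticon, assumed here to be a fixed-size 8-bit grayscale bitmap.
data class StandardEmoticon(val name: String, val gray: IntArray)

// Mean absolute pixel difference mapped onto [0, 1]; 1.0 means identical images.
fun similarity(a: IntArray, b: IntArray): Double {
    require(a.size == b.size) { "figures must share the same resolution" }
    val meanDiff = a.indices.sumOf { abs(a[it] - b[it]) }.toDouble() / a.size
    return 1.0 - meanDiff / 255.0
}

// Returns the most similar stored emoticon if its score clears the threshold, otherwise
// null, meaning a new emoticon should be generated from the effect figure instead.
fun matchStandardEmoticon(
    effectFigure: IntArray,
    library: List<StandardEmoticon>,
    threshold: Double = 0.9
): StandardEmoticon? =
    library
        .map { it to similarity(effectFigure, it.gray) }
        .filter { (_, score) -> score >= threshold }
        .maxByOrNull { (_, score) -> score }
        ?.first
```

A caller would fall back to generating a fresh emoticon from the effect figure whenever matchStandardEmoticon returns null.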
CN201310645748.2A 2013-12-03 2013-12-03 Device and method for generating emoticons based on a captured image Active CN104333688B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310645748.2A CN104333688B (en) Device and method for generating emoticons based on a captured image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310645748.2A CN104333688B (en) Device and method for generating emoticons based on a captured image

Publications (2)

Publication Number Publication Date
CN104333688A CN104333688A (en) 2015-02-04
CN104333688B true CN104333688B (en) 2018-07-10

Family

ID=52408331

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310645748.2A Active CN104333688B (en) Device and method for generating emoticons based on a captured image

Country Status (1)

Country Link
CN (1) CN104333688B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104635930A (en) * 2015-02-09 2015-05-20 联想(北京)有限公司 Information processing method and electronic device
US10594638B2 (en) 2015-02-13 2020-03-17 International Business Machines Corporation Point in time expression of emotion data gathered from a chat session
US10855632B2 (en) 2016-07-19 2020-12-01 Snap Inc. Displaying customized electronic messaging graphics
CN108320316B (en) * 2018-02-11 2022-03-04 秦皇岛中科鸿合信息科技有限公司 Personalized facial expression package manufacturing system and method
CN108596114A (en) * 2018-04-27 2018-09-28 佛山市日日圣科技有限公司 Expression generation method and device
CN109120866B (en) * 2018-09-27 2020-04-03 腾讯科技(深圳)有限公司 Dynamic expression generation method and device, computer readable storage medium and computer equipment
CN111507143B (en) 2019-01-31 2023-06-02 北京字节跳动网络技术有限公司 Expression image effect generation method and device and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1606347A (en) * 2004-11-15 2005-04-13 北京中星微电子有限公司 A video communication method
CN101179471A (en) * 2007-05-31 2008-05-14 腾讯科技(深圳)有限公司 Method and apparatus for implementing user personalized dynamic expression picture with characters
CN101686442A (en) * 2009-08-11 2010-03-31 深圳华为通信技术有限公司 Method and device for achieving user mood sharing by using wireless terminal
CN102193620A (en) * 2010-03-02 2011-09-21 三星电子(中国)研发中心 Input method based on facial expression recognition

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070071288A1 (en) * 2005-09-29 2007-03-29 Quen-Zong Wu Facial features based human face recognition method
US20130159919A1 (en) * 2011-12-19 2013-06-20 Gabriel Leydon Systems and Methods for Identifying and Suggesting Emoticons

Also Published As

Publication number Publication date
CN104333688A (en) 2015-02-04

Similar Documents

Publication Publication Date Title
CN104333688B (en) Device and method for generating emoticons based on a captured image
US12094047B2 (en) Animated emoticon generation method, computer-readable storage medium, and computer device
US11513608B2 (en) Apparatus, method and recording medium for controlling user interface using input image
TWI720062B (en) Voice input method, device and terminal equipment
CN107370887B (en) Expression generation method and mobile terminal
CN105814522B (en) Device and method for displaying user interface of virtual input device based on motion recognition
JP2022008470A (en) Avatar creating user interface
CN109219796A (en) Digital touch on real-time video
JP2019016354A (en) Method and device for inputting expression icons
CN110460799A (en) Intention camera
CN108062760B (en) Video editing method and device and intelligent mobile terminal
CN109064387A (en) Image special effect generation method, device and electronic equipment
WO2020078319A1 (en) Gesture-based manipulation method and terminal device
CN104461348B (en) Information choosing method and device
CN112672061B (en) Video shooting method and device, electronic equipment and medium
CN108605165A (en) Method and electronic device for generating a video thumbnail in an electronic device
EP3340077B1 (en) Method and apparatus for inputting expression information
CN110196646A (en) Data input method and mobile terminal
WO2023061414A1 (en) File generation method and apparatus, and electronic device
CN106951090A (en) Image processing method and device
CN104504083A (en) Image confirming method and device based on image searching
JP6697043B2 (en) Animation image generation method based on key input and user terminal performing the method
KR20140010525A (en) Emoticon service system and emoticon service providing method thereof
CN110377220A (en) Instruction response method and apparatus, storage medium, and electronic device
CN106791398A (en) Image processing method and terminal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant