CN104333688A - Equipment and method for generating emoticon based on shot image - Google Patents


Info

Publication number
CN104333688A
CN104333688A (application CN201310645748.2A)
Authority
CN
China
Prior art keywords
emoticon
user
expression
message
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310645748.2A
Other languages
Chinese (zh)
Other versions
CN104333688B (en)
Inventor
张柏卉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Guangzhou Mobile R&D Center
Samsung Electronics Co Ltd
Original Assignee
Samsung Guangzhou Mobile R&D Center
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Guangzhou Mobile R&D Center, Samsung Electronics Co Ltd filed Critical Samsung Guangzhou Mobile R&D Center
Priority to CN201310645748.2A
Publication of CN104333688A
Application granted
Publication of CN104333688B
Current legal status: Active
Anticipated expiration

Landscapes

  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides an equipment and method for generating an emoticon based on a shot image. The equipment comprises: an image acquisition unit for acquiring an image shot by a shooting device; a preview unit for generating an expression effect picture for preview based on the acquired image and displaying the expression effect picture to the user; and an emoticon generation unit for generating an emoticon for output based on the expression effect picture and adding the generated emoticon into the message input by the user.

Description

Equipment and method for generating an emoticon based on a shot image
Technical Field
The present application relates to emoticon input technology, and more particularly to an equipment and method for generating and inputting an emoticon in real time based on a shot image while a user inputs a message.
Background Art
In the prior art, when a user wants to insert an emoticon into a message being input (for example, a message input in the interface of chat software or social media), the user can only choose the desired emoticon from a predetermined emoticon library. Here, the emoticon library is an emoticon set preset in the application software or downloaded from the web, and the emoticons in it are static or animated graphics designed by the software developer or an emoticon producer.
Although dedicated software for making personalized emoticons already exists, these emoticons must be made in advance and then copied into the input message in the form of pictures, which in essence is still a copy-and-paste of pictures. In addition, although some chat software supports picture capture during a video call, this is limited to shooting images to obtain the corresponding picture files; such pictures cannot be edited together with text, and transmitting them consumes considerable data traffic and takes a long time.
Summary of the invention
An object of exemplary embodiments of the present invention is to provide an equipment and method capable of generating an emoticon based on a shot image while a user inputs a message, and of inputting the emoticon.
According to an aspect of the present invention, an equipment for generating an emoticon based on a shot image while a user inputs a message is provided, comprising: an image acquisition unit for acquiring an image shot by a shooting device; a preview unit for generating an expression effect picture for preview based on the acquired image and displaying the expression effect picture to the user; and an emoticon generation unit for generating an emoticon for output based on the expression effect picture and adding the generated emoticon into the message input by the user.
The equipment may further comprise: a shooting device for shooting images.
In the equipment, the shooting device may be turned on according to an expression shooting instruction of the user to start shooting images.
In the equipment, the expression shooting instruction of the user may comprise at least one of the following: double-clicking an arbitrary position in an input box for inputting the message by touch or by mouse; clicking a shooting menu item or shooting button in the input box by touch or by mouse; performing a sliding operation in the area of the input box; inputting an expression shooting command by voice.
In the equipment, the shooting device may be turned on to start shooting images when the user enters the interface for inputting messages or when the user starts to input a message.
In the equipment, the shooting device may comprise a front camera or a rear camera, and the image acquisition unit may acquire the image shot by the front camera or the rear camera; alternatively, the shooting device may comprise both a front camera and a rear camera, and the image acquisition unit may acquire an image synthesized from the images shot separately by the front camera and the rear camera.
In the equipment, the preview unit may generate the expression effect picture for preview by embedding the acquired image into a predetermined expression frame, and display the expression effect picture in a predetermined area on the screen.
In the equipment, the predetermined area may be located in the input box for inputting the message, or in a preview window arranged separately and independently of the input box.
In the equipment, the predetermined expression frame may be a hollow circle.
In the equipment, the preview unit may generate the expression effect picture for preview by embedding the acquired image into a predetermined expression frame, and display the expression effect picture at the position of the cursor so as to replace the display of the cursor.
In the equipment, the preview unit may generate the expression effect picture for preview by embedding the acquired image into a predetermined expression frame, and additionally display the expression effect picture around the cursor.
In the equipment, when the cursor is at the end of the input message, the preview unit may display the expression effect picture after the cursor; when the cursor is not at the end of the input message, the preview unit may display the expression effect picture directly on the next line below the cursor, or display it on the next line below the cursor according to a preview instruction of the user.
In the equipment, the emoticon generation unit may acquire the expression effect picture confirmed by the user, generate the emoticon for output based on the acquired expression effect picture, and add the generated emoticon into the message input by the user.
In the equipment, the emoticon generation unit may acquire the expression effect picture confirmed by the user through at least one of the following operations: clicking the expression effect picture by touch or by mouse; clicking a confirmation menu item or confirmation button displayed on the screen; pressing a confirmation key or side key; inputting a confirmation command by voice; clicking an arbitrary position on the screen by touch or by mouse.
In the equipment, after the preview unit displays the expression effect picture to the user, when the user clicks an arbitrary position on the screen by touch or by mouse, the emoticon generation unit may convert the expression effect picture at that moment to generate the emoticon for output, and add the generated emoticon into the message input by the user.
In the equipment, after the generated emoticon is added into the message input by the user, or when the user inputs an instruction to abandon the emoticon, the shooting device may be turned off, and the cursor may be displayed again.
The equipment may further comprise: an emoticon storage unit for storing at least one standard emoticon generated in advance based on shot images.
In the equipment, the emoticon generation unit may compare the expression effect picture with the at least one standard emoticon; when the emoticon generation unit determines that the similarity between the expression effect picture and one or more of the at least one standard emoticon exceeds a threshold, the emoticon generation unit may add the most similar standard emoticon, as the emoticon for output, into the message input by the user.
In the equipment, the emoticon generation unit may add the generated emoticon as a standard emoticon according to an expression adding instruction of the user.
In the equipment, the acquired image may be a static or dynamic expression image of the user.
The equipment may further comprise: a prompt unit for analyzing the message input by the user and prompting the user to make a corresponding expression according to the analysis result.
In the equipment, the shooting device may shoot the expression of the user according to an expression tracking method.
According to another aspect of the present invention, a method for generating an emoticon based on a shot image while a user inputs a message is provided, comprising: acquiring a shot image; generating an expression effect picture for preview based on the acquired image, and displaying the expression effect picture to the user; and generating an emoticon for output based on the expression effect picture, and adding the generated emoticon into the message input by the user.
In the method, before acquiring the shot image, the method may further comprise: shooting an image.
In the method, the step of shooting an image may comprise: turning on a shooting device according to an expression shooting instruction of the user to start shooting images.
In the method, the expression shooting instruction of the user may comprise at least one of the following: double-clicking an arbitrary position in an input box for inputting the message by touch or by mouse; clicking a shooting menu item or shooting button in the input box by touch or by mouse; performing a sliding operation in the area of the input box; inputting an expression shooting command by voice.
In the method, the step of shooting an image may comprise: turning on the shooting device to start shooting images when the user enters the interface for inputting messages or when the user starts to input a message.
In the method, the step of shooting an image may comprise: shooting images with a front camera or a rear camera included in the shooting device, in which case the step of acquiring the shot image comprises acquiring the image shot by the front camera or the rear camera; alternatively, the step of shooting an image may comprise: shooting images with both a front camera and a rear camera included in the shooting device, in which case the step of acquiring the shot image comprises acquiring an image synthesized from the images shot separately by the front camera and the rear camera.
In the method, the expression effect picture for preview may be generated by embedding the acquired image into a predetermined expression frame, and the expression effect picture may be displayed in a predetermined area on the screen.
In the method, the predetermined area may be located in the input box for inputting the message, or in a preview window arranged separately and independently of the input box.
In the method, the predetermined expression frame may be a hollow circle.
In the method, the expression effect picture for preview may be generated by embedding the acquired image into a predetermined expression frame, and the expression effect picture may be displayed at the position of the cursor so as to replace the display of the cursor.
In the method, the expression effect picture for preview may be generated by embedding the acquired image into a predetermined expression frame, and the expression effect picture may additionally be displayed around the cursor.
In the method, when the cursor is at the end of the input message, the expression effect picture may be displayed after the cursor; when the cursor is not at the end of the input message, the expression effect picture may be displayed directly on the next line below the cursor, or displayed on the next line below the cursor according to a preview instruction of the user.
In the method, the step of generating the emoticon for output may comprise: acquiring the expression effect picture confirmed by the user, and generating the emoticon for output based on the acquired expression effect picture.
In the method, in the step of acquiring the expression effect picture confirmed by the user, the expression effect picture confirmed by the user through at least one of the following operations may be acquired: clicking the expression effect picture by touch or by mouse; clicking a confirmation menu item or confirmation button displayed on the screen; pressing a confirmation key or side key; inputting a confirmation command by voice; clicking an arbitrary position on the screen by touch or by mouse.
In the method, after the expression effect picture is displayed to the user, when the user clicks an arbitrary position on the screen by touch or by mouse, the expression effect picture at that moment may be converted to generate the emoticon for output, and the generated emoticon may be added into the message input by the user.
The method may further comprise: after the generated emoticon is added into the message input by the user, or when the user inputs an instruction to abandon the emoticon, turning off the shooting device and displaying the cursor again.
The method may further comprise: storing at least one standard emoticon generated in advance based on shot images.
In the method, the step of generating the emoticon for output based on the expression effect picture and adding the generated emoticon into the message input by the user may comprise: comparing the expression effect picture with the at least one standard emoticon, and, when it is determined that the similarity between the expression effect picture and one or more of the at least one standard emoticon exceeds a threshold, adding the most similar standard emoticon, as the emoticon for output, into the message input by the user.
The method may further comprise: adding the generated emoticon as a standard emoticon according to an expression adding instruction of the user.
In the method, the acquired image may be a static or dynamic expression image of the user.
The method may further comprise: analyzing the message input by the user, and prompting the user to make a corresponding expression according to the analysis result.
In the method, the step of shooting an image may comprise: shooting the expression of the user according to an expression tracking method.
With the equipment and method according to exemplary embodiments of the present invention, an emoticon generated based on a shot image can be inserted in real time while the user inputs a message. This not only enriches the content of emoticons, but also allows the shot image, once converted into an emoticon, to be laid out together with the text in the message, while saving data traffic and time in transmission.
Brief Description of the Drawings
The above and other objects and features of exemplary embodiments of the present invention will become more apparent from the following description taken in conjunction with the accompanying drawings, which illustrate the embodiments by way of example, in which:
Fig. 1 is a block diagram of an equipment for generating an emoticon based on a shot image while a user inputs a message according to an exemplary embodiment of the present invention;
Fig. 2 illustrates an example of a composite image acquired by the image acquisition unit according to an exemplary embodiment of the present invention;
Fig. 3 illustrates an example of embedding a shot image into a predetermined expression frame according to an exemplary embodiment of the present invention;
Fig. 4 is a flowchart of a method for generating an emoticon based on a shot image while a user inputs a message according to an exemplary embodiment of the present invention;
Fig. 5 is a flowchart of a process of generating an emoticon by the emoticon generation equipment according to an exemplary embodiment of the present invention;
Fig. 6 is a flowchart of a process of generating an emoticon by the emoticon generation equipment according to another exemplary embodiment of the present invention;
Fig. 7 is a flowchart of a process of generating an emoticon by the emoticon generation equipment according to yet another exemplary embodiment of the present invention;
Fig. 8 illustrates an example of an expression effect picture for preview according to an exemplary embodiment of the present invention;
Fig. 9 illustrates an example of an expression effect picture for preview according to another exemplary embodiment of the present invention;
Fig. 10 illustrates an example of an expression effect picture for preview according to yet another exemplary embodiment of the present invention;
Fig. 11 illustrates an example of an expression effect picture for preview according to still another exemplary embodiment of the present invention;
Fig. 12 illustrates an example of standard emoticons according to an exemplary embodiment of the present invention.
Detailed Description
Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings, in which the same reference numerals refer to the same components throughout.
Fig. 1 is a block diagram of an equipment for generating an emoticon based on a shot image while a user inputs a message according to an exemplary embodiment of the present invention. As an example, the emoticon generation equipment shown in Fig. 1 can be used to generate an emoticon based on a shot image while the user inputs a message on various electronic products such as a personal computer, a smartphone, or a tablet computer.
As shown in Fig. 1, the emoticon generation equipment comprises: an image acquisition unit 10 for acquiring an image shot by a shooting device 5; a preview unit 20 for generating an expression effect picture for preview based on the acquired image and displaying the expression effect picture to the user; and an emoticon generation unit 30 for generating an emoticon for output based on the expression effect picture and adding the generated emoticon into the message input by the user. Here, the shooting device 5 may be included in the emoticon generation equipment, or may be a peripheral device connected to the emoticon generation equipment. The image acquisition unit 10, the preview unit 20, and the emoticon generation unit 30 may be implemented by general-purpose hardware such as a digital signal processor or a field-programmable gate array, by dedicated hardware such as a special-purpose chip, or entirely in software by a computer program, for example as modules of chat software or social media software installed on the electronic product.
While the user inputs a message, the emoticon generation equipment shown in Fig. 1 can convert a shot image into emoticon form and insert it into the input message, thereby laying it out together with text and saving transmission traffic and time.
Specifically, the image acquisition unit 10 acquires the image shot by the shooting device 5. Here, preferably, the acquired image may be the expression of the user in content, and may be a still image or a dynamic image in form. After the shooting device 5 is turned on to start shooting images, the image acquisition unit 10 may automatically acquire the image shot by the shooting device 5 in real time, or acquire the image shot by the shooting device 5 under the control of the user. For example, the shooting device 5 may shoot continuous moving images (i.e., video); correspondingly, the image acquisition unit 10 may acquire the shot video automatically or under the control of the user, or may capture a single still image at a predetermined interval, or may synthesize a dynamic image such as a GIF after continuously capturing several still images within a predetermined time. The shooting device 5 itself may also be set to shoot a single still image at a predetermined interval, or to shoot several still images continuously within a predetermined time to synthesize a dynamic image.
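As a rough illustration of the interval-capture behaviour described above, the following Python sketch (an assumption for illustration, not code from the patent) samples single still frames from a shot video at a predetermined interval, and collects the frames shot within a predetermined time for later synthesis into a dynamic image; the frame objects and frame rate are hypothetical stand-ins for real camera output.

```python
def sample_frames(frames, fps, interval_s):
    """Pick one frame every `interval_s` seconds from a list shot at `fps`."""
    step = max(1, int(round(interval_s * fps)))
    return frames[::step]

def collect_for_animation(frames, fps, duration_s):
    """Collect the frames shot within the first `duration_s` seconds,
    e.g. to be synthesized into an animated (GIF-like) emoticon afterwards."""
    count = int(duration_s * fps)
    return frames[:count]

if __name__ == "__main__":
    video = [f"frame{i}" for i in range(90)]          # 3 s of video at 30 fps
    stills = sample_frames(video, fps=30, interval_s=0.5)   # one still per 0.5 s
    burst = collect_for_animation(video, fps=30, duration_s=1.0)
    print(len(stills), len(burst))
```

Either path yields a list of frames: a single sampled still can become a static emoticon, while the collected burst can be synthesized into a dynamic one.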
As an optional feature, the shooting device 5 may adopt expression tracking so as to capture the expression of the user accurately when shooting the facial expression image of the user. Here, as another optional feature, the emoticon generation equipment shown in Fig. 1 may further comprise a prompt unit (not shown) for analyzing the message input by the user and prompting the user to make a corresponding expression according to the analysis result. Specifically, the prompt unit may analyze the meaning of the input message using semantic analysis technology and, after the shooting device 5 is turned on, prompt the user to make the expression corresponding to the analysis result. For example, when the message input by the user is "I'm so happy", the prompt unit may determine that the mood of the input message is happy and then prompt the user, for example by voice, to make a joyful expression, such as by outputting "say cheese" by voice. Or, when the message input by the user is "I'm devastated", the prompt unit may determine that the mood of the input message is sad and then prompt the user, for example by voice, to make a sad expression, such as by outputting "please make a sad expression" by voice. Alternatively, the prompt unit may not have the semantic analysis function, and may simply prompt the user by voice, after the shooting device 5 is turned on, that expression shooting is about to begin.
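A minimal sketch of the prompt unit's behaviour might look as follows. This is an assumption for illustration only: keyword matching stands in for the semantic analysis technology the patent refers to, and the keyword and prompt tables are invented examples.

```python
# Hypothetical keyword table standing in for real semantic/sentiment analysis.
MOOD_KEYWORDS = {
    "happy": ["happy", "glad", "great news"],
    "sad": ["sad", "devastated", "heartbroken"],
}

# Prompts for each detected mood; None is the no-analysis fallback,
# where the unit simply announces that expression shooting will start.
MOOD_PROMPTS = {
    "happy": "Say cheese!",
    "sad": "Please make a sad expression.",
    None: "Expression shooting is about to start.",
}

def analyze_mood(message):
    """Return the first mood whose keywords appear in the message, else None."""
    text = message.lower()
    for mood, words in MOOD_KEYWORDS.items():
        if any(w in text for w in words):
            return mood
    return None

def prompt_for(message):
    """Pick the (e.g. voice) prompt to play once the shooting device is on."""
    return MOOD_PROMPTS[analyze_mood(message)]
```

For example, `prompt_for("I'm so happy today")` yields the "Say cheese!" prompt, matching the happy-mood example above.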
The shooting device 5 may comprise a front camera or a rear camera; in this case, the image acquisition unit 10 acquires the image shot by the front camera or the image shot by the rear camera. In addition, the shooting device 5 may comprise both a front camera and a rear camera; in this case, the image acquisition unit 10 acquires an image synthesized from the images shot separately by the front camera and the rear camera. Fig. 2 illustrates an example of such a composite image acquired by the image acquisition unit 10 according to an exemplary embodiment of the present invention. As shown in Fig. 2, the portrait of a lady shot by the front camera and the image of a child shot by the rear camera are combined into a single image by the image acquisition unit 10.
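One simple way to synthesize the two camera images, sketched here under the assumption that each image is a plain 2-D pixel grid (a real device would operate on camera buffers), is to paste the smaller front-camera portrait into a corner of the rear-camera scene, as in Fig. 2:

```python
def paste(rear, front, top=0, left=0):
    """Return a copy of the rear-camera grid with the front-camera grid
    pasted at row `top`, column `left` (picture-in-picture compositing)."""
    out = [row[:] for row in rear]          # copy so the input stays intact
    for r, row in enumerate(front):
        for c, px in enumerate(row):
            out[top + r][left + c] = px
    return out

if __name__ == "__main__":
    rear = [["R"] * 4 for _ in range(4)]    # rear-camera scene
    front = [["F"] * 2 for _ in range(2)]   # front-camera portrait
    combined = paste(rear, front, top=1, left=1)
    for row in combined:
        print("".join(row))
```

The resulting single image is what the image acquisition unit would then hand to the preview unit.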
As an example, the shooting device 5 may be turned on according to an expression shooting instruction of the user to start shooting images. Specifically, while inputting a message, when the user wants to insert an emoticon based on a shot image, the user may input an expression shooting instruction, for example by at least one of the following operations: double-clicking an arbitrary position in the input box for inputting the message by touch or by mouse; clicking a shooting menu item or shooting button in the input box by touch or by mouse; performing a sliding operation in the area of the input box (for example, with a finger or a stylus); or inputting an expression shooting command by voice. In this case, the image acquisition unit 10 may automatically acquire the image shot by the shooting device 5 in real time. After the emoticon based on the shot image is generated and added into the input message, or when the user inputs an instruction to abandon the emoticon, the shooting device 5 may be turned off.
As another example, the shooting device 5 may be turned on to start shooting images when the user enters the interface for inputting messages or when the user starts to input a message. In this case, the shooting device 5 may stay on while the user inputs messages (for example, while chat software is in use). The image acquisition unit 10 may then automatically acquire the image shot by the shooting device 5 in real time, or acquire it under the control of the user.
In addition, the preview unit 20 generates the expression effect picture for preview based on the acquired image, and displays the expression effect picture to the user. For example, the preview unit 20 generates the expression effect picture for preview by embedding the acquired image into a predetermined expression frame, and displays the expression effect picture in a predetermined area on the screen. As an example, the predetermined expression frame may be a hollow circle; it should be understood, however, that the hollow circle is not intended to limit the scope of the invention. Fig. 3 illustrates an example of embedding a shot image into a predetermined expression frame according to an exemplary embodiment of the present invention. As shown in Fig. 3, expression frames of various shapes may be adopted to generate the expression effect picture. Here, the specification of the expression frame (e.g., size, length, width) may serve as the constraint for converting the acquired image into an emoticon. In addition, the predetermined area may be located in the input box for inputting the message, for example at the position of the cursor or around the cursor; or the predetermined area may be located in a preview window arranged separately and independently of the input box. Preferably, the size and position of the preview window can be set and adjusted by the user.
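Embedding the acquired image into a hollow-circle expression frame can be sketched as masking: pixels inside the inscribed circle are kept, and the rest are made transparent. This is an illustrative assumption (images as square 2-D grids, `None` as transparency), not the patent's implementation.

```python
def embed_in_circle(image):
    """image: square 2-D pixel grid; return a copy masked to the
    inscribed circle, with pixels outside it set to None (transparent)."""
    n = len(image)
    cx = cy = (n - 1) / 2.0        # grid centre
    radius = n / 2.0               # inscribed-circle radius
    out = []
    for r in range(n):
        row = []
        for c in range(n):
            inside = (r - cy) ** 2 + (c - cx) ** 2 <= radius ** 2
            row.append(image[r][c] if inside else None)
        out.append(row)
    return out
```

Other frame shapes from Fig. 3 would simply swap in a different inside/outside test, with the frame's dimensions acting as the conversion constraint mentioned above.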
In addition, the emoticon generation unit 30 generates the emoticon for output based on the expression effect picture, and adds the generated emoticon into the message input by the user.
As an example, the emoticon generation unit 30 may acquire the expression effect picture confirmed by the user, generate the emoticon for output based on the acquired expression effect picture, and add the generated emoticon into the message input by the user (for example, at the current position of the cursor). Here, the emoticon generation unit 30 converts the expression effect picture confirmed by the user into emoticon form (meeting a predetermined length and width), so that the converted emoticon can be inserted into the message input by the user to achieve layout together with text, while requiring less data traffic and time than transmitting the picture itself.
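The conversion to a predetermined length and width could be as simple as nearest-neighbour scaling. The sketch below is an assumption for illustration: the 24x24 target size is a hypothetical example, not a value from the patent, and real code would also handle colour channels and transparency.

```python
EMOTICON_W, EMOTICON_H = 24, 24   # hypothetical predetermined emoticon size

def to_emoticon(image, width=EMOTICON_W, height=EMOTICON_H):
    """image: 2-D pixel grid (rows x cols); return a height x width grid
    produced by nearest-neighbour sampling, so the result fits inline
    with text in the message."""
    src_h, src_w = len(image), len(image[0])
    return [
        [image[r * src_h // height][c * src_w // width] for c in range(width)]
        for r in range(height)
    ]
```

Because every output emoticon has the same fixed dimensions, it can be laid out in the message exactly like a character, which is what makes mixed text-and-emoticon editing possible.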
For example, the emoticon generation unit 30 may acquire the expression effect picture confirmed by the user through at least one of the following operations: clicking the expression effect picture by touch or by mouse; clicking a confirmation menu item or confirmation button displayed on the screen; pressing a confirmation key or side key; inputting a confirmation command by voice; or clicking an arbitrary position on the screen by touch or by mouse. By arranging a suitable confirmation operation for different situations, the user can conveniently confirm the expression preview on which the finally generated emoticon is based while being shot.
As another example, when the shooting device 5 is turned on according to the expression shooting instruction of the user to start shooting images, the emoticon generation unit 30 need not receive a confirmed expression effect picture from the user, but may directly select a standard emoticon as the output emoticon based on the degree of similarity between the expression effect picture and prestored standard emoticons.
Specifically, in this case, the emoticon generation equipment shown in Fig. 1 may further comprise an emoticon storage unit (not shown) for storing at least one standard emoticon generated in advance based on shot images. As an example, the standard emoticons may be typical expressions, the user's favorite expressions representing different moods, and so on, and may be continuously updated; that is, subsequently generated emoticons may be added to the emoticon storage unit as standard emoticons, or may replace the original standard emoticons in the emoticon storage unit. For example, standard emoticons meeting the emoticon format may be generated by processing shot images (extracting the facial expression part of the image and performing processing such as scaling on the extracted part); these standard emoticons may be stored in a dedicated standard emoticon library, or in the default emoticon library. Fig. 12 illustrates an example of adding generated standard emoticons into an existing default emoticon library. As can be seen from Fig. 12, the last two emoticons are standard emoticons newly added in a user-defined manner.
Correspondingly, the emoticon generation unit 30 may compare the previewed expression effect picture with the at least one standard emoticon in the emoticon storage unit; when the emoticon generation unit 30 determines that the similarity between the expression effect picture and one or more of the at least one standard emoticon exceeds a threshold (for example, a similarity above 80%), the emoticon generation unit 30 may add the most similar standard emoticon, as the emoticon for output, into the message input by the user. Here, the emoticon generation unit 30 may compare the similarity between the expression effect picture and the standard emoticons based on texture features, color features, brightness, and so on, and obtain a numerical similarity value reflecting the degree of similarity.
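The threshold-and-pick logic can be sketched as below. This is an assumption for illustration: similarity here is a toy normalized brightness-histogram intersection, whereas the patent also mentions texture and color features; pixel lists and the standard-emoticon names are invented.

```python
def brightness_histogram(pixels, bins=8):
    """Normalized histogram over brightness values 0..255."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    total = float(len(pixels))
    return [h / total for h in hist]

def similarity(a, b):
    """Histogram intersection of two pixel lists, a value in [0, 1]."""
    ha, hb = brightness_histogram(a), brightness_histogram(b)
    return sum(min(x, y) for x, y in zip(ha, hb))

def pick_standard_emoticon(effect_pixels, standards, threshold=0.8):
    """standards: {name: pixel list}. Return the most similar standard
    emoticon's name when its similarity exceeds the threshold (0.8 mirrors
    the 80% example above), else None (fall back to user confirmation)."""
    best_name, best_sim = None, 0.0
    for name, pixels in standards.items():
        s = similarity(effect_pixels, pixels)
        if s > best_sim:
            best_name, best_sim = name, s
    return best_name if best_sim > threshold else None
```

Returning `None` below the threshold leaves room for the confirmation-based path described earlier, so the two modes compose naturally.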
By the way, do not need user to carry out confirmation operation and can generate corresponding emoticon, not only reflect the concrete expression photographed, and accelerate the speed generating emoticon, and simplify user operation.
Below, the method be used for according to an exemplary embodiment of the present invention when user inputs message based on the Computer image genration expression symbol of shooting composition graphs 4 is described to Figure 11.Described method can have been come by emoticon generation equipment as shown in Figure 1, also realizes by computer program.Such as, described method is by being arranged on the execution that should be used for for inputting message in electronic product.
Fig. 4 illustrates the flow chart of the method accorded with for the Computer image genration expression when user inputs message based on shooting according to an exemplary embodiment of the present invention.Exemplarily, the method according to Fig. 4, during user inputs message in the various electronic products such as such as personal computer, smart mobile phone, panel computer, can generate emoticon based on the image of shooting.
Referring to Fig. 4, in step S10, a captured image is obtained. Here, preferably, the obtained image may show the user's expression in terms of content, and may be a still image or a dynamic image in terms of form. Specifically, after the photographing device is turned on to start capturing images, the image captured by the photographing device may be obtained automatically in real time, or may be obtained under the control of the user. For example, the photographing device may capture continuous moving images (i.e., video); correspondingly, in step S10, the captured video may be obtained automatically or under the control of the user, or a single still image may be extracted at a predetermined interval, or several still images may be extracted consecutively within a predetermined time and then composited into a dynamic image such as a GIF. Alternatively, the photographing device itself may be set to capture a single still image at a predetermined interval, or to capture several still images consecutively within a predetermined time for compositing into a dynamic image.
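The frame-selection logic described above (a single still at a predetermined interval, or several stills within a predetermined time window for compositing into a dynamic image) can be sketched as follows. This is an illustrative sketch, not the patented implementation; the frame representation and parameter names are assumptions, and the actual compositing into a GIF is left to an image library.

```python
def select_frames(frames, interval, window):
    """Pick stills for a dynamic image from a captured frame stream.

    frames   -- list of (timestamp_seconds, image) pairs, in capture order
    interval -- predetermined sampling interval in seconds
    window   -- predetermined time span to cover, in seconds

    Returns the images whose timestamps fall on the sampling grid
    within the window.
    """
    selected = []
    next_sample = 0.0
    for t, image in frames:
        if t > window:
            break
        if t >= next_sample:
            selected.append(image)
            next_sample += interval
    return selected

# Example: a 2-second capture at 10 fps, sampled every 0.5 s
stream = [(i / 10.0, "frame%d" % i) for i in range(20)]
stills = select_frames(stream, interval=0.5, window=2.0)
```

With the example stream, four stills (one every five frames) are kept for compositing.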
As an additional step, before step S10, the method of Fig. 4 may further include: capturing an image. Here, a photographing device provided as a built-in unit or as a peripheral may be used to capture the image (for example, a still or dynamic expression image of the user). As an example, when capturing the user's facial expression image, expression tracking may be employed so as to capture the user's expression accurately.
Here, as another additional step, the emoticon generation method of Fig. 4 may further include: analyzing the message input by the user, and prompting the user to make a corresponding expression according to the analysis result. Specifically, the meaning of the input message may be analyzed using semantic analysis techniques, and after the photographing device is turned on, the user may be prompted to make the expression corresponding to the analysis result. For example, when the message input by the user is "I'm so happy", the mood of the input message may be analyzed as happy, and the user may then be prompted, for example by voice, to make a happy expression, such as by the voice output "Say cheese". Alternatively, when the message input by the user is "I am heartbroken", the mood of the input message may be analyzed as sad, and the user may then be prompted, for example by voice, to make a sad expression, such as by the voice output "Please make a sad expression". In addition, the prompting step may omit the semantic analysis and simply give the user a voice prompt, after the photographing device is turned on, that expression capture is about to begin.
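A minimal sketch of this prompting step: the message's mood is estimated and mapped to a spoken prompt. Simple keyword matching stands in here for the semantic analysis the text mentions; the keyword lists and prompt strings are illustrative assumptions.

```python
# Keyword lists standing in for a real semantic-analysis component.
MOOD_KEYWORDS = {
    "happy": ("happy", "great", "wonderful", "yay"),
    "sad": ("sad", "hurt", "heartbroken", "upset"),
}

PROMPTS = {
    "happy": "Say cheese!",
    "sad": "Please make a sad expression.",
    None: "Expression capture is about to begin.",  # no analysis result
}

def analyze_mood(message):
    """Return the first mood whose keywords appear in the message, else None."""
    text = message.lower()
    for mood, words in MOOD_KEYWORDS.items():
        if any(w in text for w in words):
            return mood
    return None

def prompt_for(message):
    """Choose the voice prompt issued after the camera is turned on."""
    return PROMPTS[analyze_mood(message)]
```

The fallback prompt under the `None` key corresponds to the variant without semantic analysis described above.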
According to an exemplary embodiment of the present invention, the photographing device may include a front camera or a rear camera; in this case, in step S10, the image captured by the front camera or the image captured by the rear camera is obtained. Alternatively, the photographing device may include both a front camera and a rear camera; in this case, in step S10, an image composited from the images captured separately by the front camera and the rear camera is obtained.
As an example, in step S10, the photographing device may be turned on to start capturing images according to an expression capture instruction of the user. Specifically, while inputting a message, when the user wants to insert an emoticon based on a captured image, the user may input an expression capture instruction, for example by at least one of the following operations: double-clicking, by touch or by mouse, an arbitrary position in the input box for inputting the message; clicking, by touch or by mouse, a capture menu item or capture button in the input box; performing a slide gesture in the area of the input box; inputting an expression capture command by voice. In this case, the image captured by the photographing device may be obtained automatically in real time. Correspondingly, after the emoticon based on the captured image is generated and added into the input message, or when the user inputs an instruction to abandon the emoticon, the photographing device may be turned off.
As another example, in step S10, the photographing device may be turned on to start capturing images when the user enters the interface for inputting a message or when the user starts to input a message. In this case, the photographing device may remain on while the user inputs the message (for example, while chat software is in use). Then, the image captured by the photographing device may be obtained automatically in real time, or obtained under the control of the user.
Next, in step S20, an expression effect image for preview is generated based on the obtained image, and the expression effect image is displayed to the user. For example, the expression effect image for preview may be generated by embedding the obtained image into a predetermined expression frame, and the expression effect image may be displayed in a predetermined area on the screen. As an example, the predetermined expression frame may be an open circle; however, it should be understood that the open circle is not intended to limit the scope of the invention, and expression frames of various shapes may be used to generate the expression effect image. Here, the specification of the expression frame (e.g., size, length, width) may serve as the constraint for converting the obtained image into an emoticon. In addition, the predetermined area may be located within the input box for inputting the message, for example at or around the cursor position; or the predetermined area may be located in a preview window provided separately and independently of the input box, and, preferably, the size and position of the preview window can be set and adjusted by the user.
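Embedding the obtained image into an open-circle expression frame amounts to masking out pixels outside the circle inscribed in the image. A toy sketch on a plain 2-D pixel grid (a real implementation would use an image library and alpha transparency; the grid representation and `blank` value are assumptions):

```python
def apply_circular_frame(pixels, blank=0):
    """Keep only pixels inside the circle inscribed in a square grid.

    pixels -- square 2-D list of pixel values
    blank  -- value written outside the frame (e.g. a transparent pixel)
    """
    n = len(pixels)
    c = (n - 1) / 2.0          # center coordinate of the grid
    r2 = (n / 2.0) ** 2        # squared radius of the inscribed circle
    framed = []
    for y, row in enumerate(pixels):
        framed.append([
            v if (x - c) ** 2 + (y - c) ** 2 <= r2 else blank
            for x, v in enumerate(row)
        ])
    return framed

# A 4x4 grid of 1s: only the corners fall outside the inscribed circle.
grid = [[1] * 4 for _ in range(4)]
framed = apply_circular_frame(grid)
```

The same masking generalizes to any frame shape, matching the remark that differently shaped expression frames may be used.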
Then, in step S30, an emoticon for output is generated based on the expression effect image, and the generated emoticon is added into the message input by the user.
As an example, in step S30, the expression effect image confirmed by the user may be obtained, the emoticon for output may be generated based on the obtained expression effect image, and the generated emoticon may be added into the message input by the user (for example, at the current cursor position). Here, the expression effect image confirmed by the user is converted into the emoticon format (meeting a predetermined length and width), so that the converted emoticon can be inserted into the user's message and laid out together with the text, while requiring less data traffic and time than transmitting the picture itself.
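Converting the confirmed effect image into an emoticon format of predetermined length and width is, at its core, a downscale to fixed dimensions. A nearest-neighbour sketch on a 2-D pixel grid (a real implementation would use an image library; the default target size is an illustrative assumption):

```python
def to_emoticon(pixels, width=2, height=2):
    """Nearest-neighbour resize of a 2-D pixel grid to width x height."""
    src_h, src_w = len(pixels), len(pixels[0])
    return [
        [pixels[y * src_h // height][x * src_w // width]
         for x in range(width)]
        for y in range(height)
    ]

# Downscale a 4x4 effect image to a 2x2 emoticon cell.
effect = [
    [1, 1, 2, 2],
    [1, 1, 2, 2],
    [3, 3, 4, 4],
    [3, 3, 4, 4],
]
emoticon = to_emoticon(effect, width=2, height=2)
```

Fixing the output dimensions is what lets the emoticon flow inline with text and keeps its transmitted size small, as the paragraph above notes.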
For example, the expression effect image confirmed by the user may be obtained through at least one of the following operations: clicking the expression effect image by touch or by mouse; clicking a confirmation menu item or confirmation button displayed on the screen; pressing a confirmation key or side key; inputting a confirmation command by voice; clicking, by touch or by mouse, an arbitrary position on the screen. By providing suitable confirmation operations for different situations, the user can, while being photographed, conveniently confirm the expression preview on which the finally generated emoticon is based.
As another example, when the photographing device is turned on to start capturing images according to the user's expression capture instruction, it may be unnecessary to receive a confirmed expression effect image from the user; instead, the standard emoticon to be used as the output emoticon may be selected directly based on the degree of similarity between the expression effect image and pre-stored standard emoticons.
Specifically, in this case, the emoticon generation method of Fig. 4 may further include the following step: storing at least one standard emoticon generated in advance based on captured images. As an example, the standard emoticons may be typical expressions, the user's favorite expressions representing different moods, and so on, and may be updated continuously; that is, subsequently generated emoticons may be stored as standard emoticons, or may replace the original standard emoticons.
Correspondingly, in step S30, the previewed expression effect image may be compared with at least one standard emoticon in the emoticon storage unit; when it is determined that the similarity between the expression effect image and one or more of the at least one standard emoticon exceeds a threshold (for example, a similarity above 80%), the most similar standard emoticon may be added, as the emoticon for output, into the message input by the user. Here, the similarity between the expression effect image and a standard emoticon may be compared based on texture features, color features, brightness, and the like, to obtain a numerical similarity value reflecting the degree of similarity.
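The comparison against stored standard emoticons can be sketched with a brightness-histogram intersection, one simple stand-in for the texture/color/brightness features mentioned. The 0.8 threshold comes from the example above; the histogram approach itself and the data layout are illustrative assumptions, not the patented feature extractor.

```python
def brightness_histogram(pixels, bins=8):
    """Normalized brightness histogram of a flat list of 0-255 values."""
    hist = [0] * bins
    for v in pixels:
        hist[v * bins // 256] += 1
    total = float(len(pixels))
    return [h / total for h in hist]

def similarity(a, b, bins=8):
    """Histogram intersection: 1.0 for identical distributions, 0.0 for disjoint."""
    ha, hb = brightness_histogram(a, bins), brightness_histogram(b, bins)
    return sum(min(x, y) for x, y in zip(ha, hb))

def best_standard_emoticon(effect, standards, threshold=0.8):
    """Return the name of the most similar standard emoticon above threshold, or None."""
    best_name, best_score = None, threshold
    for name, pixels in standards.items():
        score = similarity(effect, pixels)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Example: an effect image matches a stored "smile" far better than "frown".
effect = [10] * 50 + [200] * 50
standards = {"smile": [12] * 50 + [210] * 50, "frown": [128] * 100}
match = best_standard_emoticon(effect, standards)
```

Returning `None` when no candidate clears the threshold corresponds to the case where no standard emoticon is substituted for the user's capture.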
In this way, a corresponding emoticon can be generated without requiring the user to perform any confirmation operation, which not only reflects the specific expression that was captured, but also speeds up emoticon generation and simplifies the user's operation.
In the emoticon generation method of Fig. 4, while the user is inputting a message, the captured image can be converted into emoticon format and inserted into the input message, thereby achieving layout together with the text and saving transmission traffic and time.
Fig. 5 is a flowchart of a process of generating an emoticon by the emoticon generation device according to an exemplary embodiment of the present invention.
Referring to Fig. 5, in step S101, while the user inputs a message, an expression capture instruction of the user is received for the photographing device 5. As an example, the user's expression capture instruction may include at least one of the following: double-clicking, by touch or by mouse, an arbitrary position in the input box for inputting the message; clicking, by touch or by mouse, a capture menu item or capture button in the input box; performing a slide gesture in the area of the input box; inputting an expression capture command by voice.
After the expression capture instruction of the user is received, in step S102, the photographing device 5 is turned on to start capturing images. As an example, the photographing device 5 may capture the user's facial expression image.
Then, in step S103, the image captured by the photographing device 5 is obtained by the image acquisition unit 10; here, the obtained image may be a video of the user's expression, a single still image, or a dynamic image composited from multiple images.
In step S104, an expression effect image for preview is generated by the preview unit 20 based on the obtained image; for example, the preview unit 20 generates the expression effect image for preview by embedding the obtained image into a predetermined expression frame (for example, an open circle). In step S105, the preview unit 20 replaces the display of the cursor in the input message with the generated expression effect image. As an example, the open circle here may be converted from the current cursor (that is, the cursor at the time the user input the expression capture instruction): the current cursor becomes an open circle, and the preview unit 20 embeds the obtained image into the open circle, so that the display of the expression effect image replaces the display of the cursor. Fig. 8 illustrates examples of the expression effect image generated in both cases, with the cursor at the end of the message and in the middle of the message.
Next, in step S106, an emoticon for output is generated by the emoticon generation unit 30 based on the expression effect image, and the generated emoticon is added into the message input by the user (that is, at the position of the cursor in the input message at the time the user input the expression capture instruction). Here, as an example, the emoticon generation unit 30 may obtain the expression effect image confirmed by the user and generate the emoticon for output based on the obtained expression effect image. Specifically, after the preview unit 20 displays the expression effect image to the user, when the user clicks an arbitrary position on the screen by touch or by mouse, the emoticon generation unit 30 converts the current expression effect image into the emoticon for output and adds the generated emoticon into the message input by the user, that is, at the position where the cursor was located before it was converted into the open circle. As another example, the emoticon generation unit 30 may compare the expression effect image with at least one stored standard emoticon; when the emoticon generation unit 30 determines that the similarity between the expression effect image and one or more of the at least one standard emoticon exceeds a threshold, the emoticon generation unit 30 adds the most similar standard emoticon, as the emoticon for output, into the message input by the user.
After the emoticon is generated and added to the message, in step S107, the photographing device 5 is turned off; then, in step S108, the cursor is displayed again at its position at the time the user input the expression capture instruction.
In the process shown in Fig. 5, the photographing device 5 is turned on only when the user wishes to input an emoticon, and is turned off after the emoticon is generated and added to the message. In addition, in the process shown in Fig. 5, after the photographing device 5 is turned on, when the user inputs an instruction to abandon the emoticon, the photographing device 5 is also turned off immediately, the emoticon generation process is stopped correspondingly, and the cursor display is restored.
Fig. 6 is a flowchart of a process of generating an emoticon by the emoticon generation device according to another exemplary embodiment of the present invention.
Referring to Fig. 6, in step S111, the user enters the interface for inputting a message or starts to input a message.
Next, in step S112, the photographing device 5 is turned on to start capturing images. As an example, the photographing device 5 may capture the user's facial expression image.
Then, in step S113, the image captured by the photographing device 5 is obtained by the image acquisition unit 10; here, the obtained image may be a video of the user's expression, a single still image, or a dynamic image composited from multiple images.
In step S114, an expression effect image for preview is generated by the preview unit 20 based on the obtained image; for example, the preview unit 20 generates the expression effect image for preview by embedding the obtained image into a predetermined expression frame (for example, an open circle).
In step S115, the preview unit 20 determines whether the cursor is at the end of the input message. If it is determined in step S115 that the cursor is at the end of the input message, then in step S116 the preview unit 20 displays the generated expression effect image after the cursor. If it is determined in step S115 that the cursor is not at the end of the input message, then in step S117 the preview unit 20 displays the generated expression effect image on the line below the cursor. Fig. 9 illustrates examples of the expression effect image generated in both cases, with the cursor at the end of the message and in the middle of the message.
Next, in step S118, an emoticon for output is generated by the emoticon generation unit 30 based on the expression effect image, and the generated emoticon is added into the message input by the user (that is, at the current position of the cursor in the input message). Here, as an example, the emoticon generation unit 30 may obtain the expression effect image confirmed by the user and generate the emoticon for output based on the obtained expression effect image. Specifically, after the preview unit 20 displays the expression effect image to the user, when the user clicks an arbitrary position on the screen by touch or by mouse, the emoticon generation unit 30 converts the current expression effect image into the emoticon for output and adds the generated emoticon into the message input by the user, that is, at the current cursor position.
In the process shown in Fig. 6, as soon as the user enters the message input interface or starts to input a message, the photographing device 5 is turned on and keeps capturing. When the cursor is not at the end of the input message, the expression preview image is displayed at the line below the cursor.
In practice, however, when the user moves the cursor from the end to a position in the middle of the message, it is often not to insert an emoticon but to revise the text. In this case, the expression preview image should not be displayed.
For this reason, Fig. 7 is a flowchart of a process of generating an emoticon by the emoticon generation device according to yet another exemplary embodiment of the present invention.
Referring to Fig. 7, in step S121, the user enters the interface for inputting a message or starts to input a message.
Next, in step S122, the photographing device 5 is turned on to start capturing images. As an example, the photographing device 5 may capture the user's facial expression image.
Then, in step S123, the image captured by the photographing device 5 is obtained by the image acquisition unit 10; here, the obtained image may be a video of the user's expression, a single still image, or a dynamic image composited from multiple images.
In step S124, an expression effect image for preview is generated by the preview unit 20 based on the obtained image; for example, the preview unit 20 generates the expression effect image for preview by embedding the obtained image into a predetermined expression frame (for example, an open circle).
In step S125, the preview unit 20 determines whether the cursor is at the end of the input message. If it is determined in step S125 that the cursor is at the end of the input message, then in step S126 the preview unit 20 displays the generated expression effect image after the cursor. If it is determined in step S125 that the cursor is not at the end of the input message, then in step S127 the preview unit 20 determines whether the user has input a preview instruction. When the preview unit 20 determines in step S127 that the user has input a preview instruction, in step S128 the preview unit 20 displays the generated expression effect image on the line below the cursor. Fig. 10 illustrates an example of the expression effect image generated when the cursor is at the end of the message, and an example in which no expression effect image is generated when the cursor is in the middle of the message and the user has not input a preview instruction.
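The branching of steps S125 to S128 reduces to a small decision function; the return labels below are illustrative, not part of the patent.

```python
def preview_placement(cursor_at_end, preview_requested):
    """Where, if anywhere, the expression effect image is previewed.

    Mirrors steps S125-S128: at the end of the message the preview
    always appears after the cursor; mid-message it appears on the
    line below only when the user has explicitly asked for a preview.
    """
    if cursor_at_end:
        return "after cursor"            # step S126
    if preview_requested:
        return "line below cursor"       # step S128
    return None                          # no preview: user is editing text
```

The `None` branch captures the case motivated above: a cursor moved into the middle of the message usually signals text revision, not emoticon insertion.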
Next, in step S129, an emoticon for output is generated by the emoticon generation unit 30 based on the expression effect image, and the generated emoticon is added into the message input by the user (that is, at the current position of the cursor in the input message). Here, as an example, the emoticon generation unit 30 may obtain the expression effect image confirmed by the user and generate the emoticon for output based on the obtained expression effect image. Specifically, after the preview unit 20 displays the expression effect image to the user, when the user clicks an arbitrary position on the screen by touch or by mouse, the emoticon generation unit 30 converts the current expression effect image into the emoticon for output and adds the generated emoticon into the message input by the user, that is, at the current cursor position.
In the process shown in Fig. 7, as soon as the user enters the message input interface or starts to input a message, the photographing device 5 is turned on and keeps capturing. When the cursor is not at the end of the input message, the preview unit needs to receive a preview instruction from the user before displaying the expression preview image at the line below the cursor.
The above examples are provided only to explain the emoticon generation device and method according to exemplary embodiments of the present invention, and are not to be construed as limiting the invention. As described above, those skilled in the art may implement the present invention using various different instruction input manners, image confirmation manners, and so on. For example, besides displaying the expression effect image as shown in Figs. 8 to 10, the expression effect image may also be displayed, regardless of the cursor position, in a preview window provided separately and independently of the message input box. As shown in Fig. 11, the expression effect image is displayed in the preview window, where the size and position of the preview window can be set or adjusted by the user. In addition, after the user confirms a certain expression effect image by clicking an arbitrary position in the preview window, the emoticon generation unit 30 may convert the expression effect image according to parameters meeting the emoticon format (e.g., size and shape) to generate the emoticon for output, and add the generated emoticon to the message at the position preceding the cursor.
As can be seen from the above description of the present invention with reference to Figs. 1 to 12, in the emoticon generation device and method according to exemplary embodiments of the present invention, an image (for example, the user's facial expression image) can be captured in real time as the user requires, and the image can be converted into an emoticon and thus laid out and transmitted together with the text as part of the user's input. This allows the user not only to add a personalized, vivid expression in real time, but also saves the time and traffic of transmitting the emoticon, improving the user's chat experience. In addition, distinctive capture, preview, confirmation (such as confirming the captured preview image by clicking an arbitrary position), and emoticon generation processes are provided, which can further enrich the user's experience and facilitate the user's operation.
It should be noted that, according to implementation needs, each step described in the present application may be split into more steps, and two or more steps, or partial operations of steps, may be combined into a new step, to achieve the object of the present invention.
The above method according to the present invention may be implemented in hardware or firmware, or may be implemented as software or computer code that can be stored in a recording medium (such as a CD-ROM, RAM, floppy disk, hard disk, or magneto-optical disk), or as computer code downloaded over a network, originally stored in a remote recording medium or a non-transitory machine-readable medium, to be stored in a local recording medium, so that the method described herein can be processed by such software stored on a recording medium using a general-purpose computer, a special-purpose processor, or programmable or dedicated hardware (such as an ASIC or FPGA). It will be understood that a computer, processor, microprocessor controller, or programmable hardware includes a memory component (e.g., RAM, ROM, flash memory, etc.) that can store or receive software or computer code which, when accessed and executed by the computer, processor, or hardware, implements the processing methods described herein. Furthermore, when a general-purpose computer accesses code for implementing the processing shown herein, the execution of the code converts the general-purpose computer into a special-purpose computer for performing the processing shown herein.
Although the present invention has been shown and described with reference to preferred embodiments, those skilled in the art should understand that various modifications and changes may be made to these embodiments without departing from the spirit and scope of the present invention as defined by the claims.

Claims (23)

1. A device for generating an emoticon based on a captured image while a user inputs a message, comprising:
an image acquisition unit for obtaining an image captured by a photographing device;
a preview unit for generating an expression effect image for preview based on the obtained image, and displaying the expression effect image to the user;
an emoticon generation unit for generating an emoticon for output based on the expression effect image, and adding the generated emoticon into the message input by the user.
2. The device of claim 1, further comprising:
a photographing device for capturing images.
3. The device of claim 2, wherein the photographing device is turned on to start capturing images according to an expression capture instruction of the user.
4. The device of claim 3, wherein the expression capture instruction of the user comprises at least one of the following: double-clicking, by touch or by mouse, an arbitrary position in an input box for inputting the message; clicking, by touch or by mouse, a capture menu item or capture button in the input box; performing a slide gesture in the area of the input box; inputting an expression capture command by voice.
5. The device of claim 2, wherein the photographing device is turned on to start capturing images when the user enters an interface for inputting a message or when the user starts to input a message.
6. The device of claim 2, wherein the photographing device comprises a front camera or a rear camera, and the image acquisition unit obtains the image captured by the front camera or the rear camera; or the photographing device comprises a front camera and a rear camera, and the image acquisition unit obtains an image composited from the images captured separately by the front camera and the rear camera.
7. The device of claim 1, wherein the preview unit generates the expression effect image for preview by embedding the obtained image into a predetermined expression frame, and displays the expression effect image in a predetermined area on the screen.
8. The device of claim 7, wherein the predetermined area is located within an input box for inputting the message, or is located in a preview window provided separately and independently of the input box.
9. The device of claim 7, wherein the predetermined expression frame is an open circle.
10. The device of claim 3, wherein the preview unit generates the expression effect image for preview by embedding the obtained image into a predetermined expression frame, and displays the expression effect image at the position of the cursor to replace the display of the cursor.
11. The device of claim 5, wherein the preview unit generates the expression effect image for preview by embedding the obtained image into a predetermined expression frame, and additionally displays the expression effect image around the cursor.
12. The device of claim 11, wherein, when the cursor is at the end of the input message, the preview unit displays the expression effect image after the cursor; and when the cursor is not at the end of the input message, the preview unit displays the expression effect image directly on the line below the cursor, or displays the expression effect image on the line below the cursor according to a preview instruction of the user.
13. The device of claim 1, wherein the emoticon generation unit obtains the expression effect image confirmed by the user, generates the emoticon for output based on the obtained expression effect image, and adds the generated emoticon into the message input by the user.
14. The device of claim 13, wherein the emoticon generation unit obtains the expression effect image confirmed by the user through at least one of the following operations: clicking the expression effect image by touch or by mouse; clicking a confirmation menu item or confirmation button displayed on the screen; pressing a confirmation key or side key; inputting a confirmation command by voice; clicking, by touch or by mouse, an arbitrary position on the screen.
15. The device of claim 10, wherein, after the preview unit displays the expression effect image to the user, when the user clicks an arbitrary position on the screen by touch or by mouse, the emoticon generation unit converts the current expression effect image to generate the emoticon for output, and adds the generated emoticon into the message input by the user.
16. The device of claim 15, wherein, after the generated emoticon is added into the message input by the user, or when the user inputs an instruction to abandon the emoticon, the photographing device is turned off and the cursor is displayed again.
17. The device of claim 3, further comprising: an emoticon storage unit for storing at least one standard emoticon generated in advance based on captured images.
18. The device of claim 17, wherein the emoticon generation unit compares the expression effect image with the at least one standard emoticon, and when the emoticon generation unit determines that the similarity between the expression effect image and one or more of the at least one standard emoticon exceeds a threshold, the emoticon generation unit adds the most similar standard emoticon, as the emoticon for output, into the message input by the user.
19. The device of claim 18, wherein the emoticon generation unit adds the generated emoticon to the standard emoticons according to an expression addition instruction of the user.
20. The device of claim 2, wherein the obtained image is a still or dynamic expression image of the user.
21. The device of claim 20, further comprising: a prompting unit for analyzing the message input by the user, and prompting the user to make a corresponding expression according to the analysis result.
22. The device of claim 20, wherein the photographing device captures the user's expression using an expression tracking method.
23. A method for generating an emoticon based on a captured image while a user inputs a message, comprising:
obtaining a captured image;
generating an expression effect image for preview based on the obtained image, and displaying the expression effect image to the user;
generating an emoticon for output based on the expression effect image, and adding the generated emoticon into the message input by the user.
CN201310645748.2A 2013-12-03 2013-12-03 Device and method for generating an emoticon based on a captured image Active CN104333688B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310645748.2A CN104333688B (en) 2013-12-03 2013-12-03 Device and method for generating an emoticon based on a captured image


Publications (2)

Publication Number Publication Date
CN104333688A true CN104333688A (en) 2015-02-04
CN104333688B CN104333688B (en) 2018-07-10

Family

ID=52408331

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310645748.2A Active CN104333688B (en) Device and method for generating an emoticon based on a captured image

Country Status (1)

Country Link
CN (1) CN104333688B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104635930A (en) * 2015-02-09 2015-05-20 联想(北京)有限公司 Information processing method and electronic device
CN105897551A * 2015-02-13 2016-08-24 国际商业机器公司 Point in time expression of emotion data gathered from a chat session
CN108320316A * 2018-02-11 2018-07-24 秦皇岛中科鸿合信息科技有限公司 Personalized facial expression package manufacturing system and method
CN108596114A * 2018-04-27 2018-09-28 佛山市日日圣科技有限公司 Expression generation method and device
CN109716264A * 2016-07-19 2019-05-03 斯纳普公司 Displaying customized electronic messaging graphics
CN111507143A (en) * 2019-01-31 2020-08-07 北京字节跳动网络技术有限公司 Expression image effect generation method and device and electronic equipment
EP3758364A4 (en) * 2018-09-27 2021-05-19 Tencent Technology (Shenzhen) Company Limited Dynamic emoticon-generating method, computer-readable storage medium and computer device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1606347A (en) * 2004-11-15 2005-04-13 北京中星微电子有限公司 A video communication method
US20070071288A1 (en) * 2005-09-29 2007-03-29 Quen-Zong Wu Facial features based human face recognition method
CN101179471A (en) * 2007-05-31 2008-05-14 腾讯科技(深圳)有限公司 Method and apparatus for implementing user personalized dynamic expression picture with characters
CN101686442A (en) * 2009-08-11 2010-03-31 深圳华为通信技术有限公司 Method and device for achieving user mood sharing by using wireless terminal
CN102193620A (en) * 2010-03-02 2011-09-21 三星电子(中国)研发中心 Input method based on facial expression recognition
US20130159919A1 (en) * 2011-12-19 2013-06-20 Gabriel Leydon Systems and Methods for Identifying and Suggesting Emoticons


Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104635930A (en) * 2015-02-09 2015-05-20 联想(北京)有限公司 Information processing method and electronic device
US10904183B2 (en) 2015-02-13 2021-01-26 International Business Machines Corporation Point in time expression of emotion data gathered from a chat session
US10594638B2 (en) 2015-02-13 2020-03-17 International Business Machines Corporation Point in time expression of emotion data gathered from a chat session
CN105897551A * 2015-02-13 2016-08-24 国际商业机器公司 Point in time expression of emotion data gathered from a chat session
CN105897551B (en) * 2015-02-13 2019-03-19 国际商业机器公司 For showing the method and system of the mood of the participant in electronic chat session
CN109716264A * 2016-07-19 2019-05-03 斯纳普公司 Displaying customized electronic messaging graphics
US11418470B2 (en) 2016-07-19 2022-08-16 Snap Inc. Displaying customized electronic messaging graphics
US11438288B2 (en) 2016-07-19 2022-09-06 Snap Inc. Displaying customized electronic messaging graphics
CN109716264B (en) * 2016-07-19 2022-11-01 斯纳普公司 Displaying custom electronic message graphics
US11509615B2 (en) 2016-07-19 2022-11-22 Snap Inc. Generating customized electronic messaging graphics
CN108320316B (en) * 2018-02-11 2022-03-04 秦皇岛中科鸿合信息科技有限公司 Personalized facial expression package manufacturing system and method
CN108320316A * 2018-02-11 2018-07-24 秦皇岛中科鸿合信息科技有限公司 Personalized facial expression package manufacturing system and method
CN108596114A * 2018-04-27 2018-09-28 佛山市日日圣科技有限公司 Expression generation method and device
EP3758364A4 (en) * 2018-09-27 2021-05-19 Tencent Technology (Shenzhen) Company Limited Dynamic emoticon-generating method, computer-readable storage medium and computer device
US11645804B2 (en) 2018-09-27 2023-05-09 Tencent Technology (Shenzhen) Company Limited Dynamic emoticon-generating method, computer-readable storage medium and computer device
US12094047B2 (en) 2018-09-27 2024-09-17 Tencent Technology (Shenzhen) Company Ltd Animated emoticon generation method, computer-readable storage medium, and computer device
CN111507143A (en) * 2019-01-31 2020-08-07 北京字节跳动网络技术有限公司 Expression image effect generation method and device and electronic equipment
US12020469B2 (en) 2019-01-31 2024-06-25 Beijing Bytedance Network Technology Co., Ltd. Method and device for generating image effect of facial expression, and electronic device

Also Published As

Publication number Publication date
CN104333688B (en) 2018-07-10

Similar Documents

Publication Publication Date Title
CN109120866B (en) Dynamic expression generation method and device, computer readable storage medium and computer equipment
TWI720062B (en) Voice input method, device and terminal equipment
CN104333688A (en) Equipment and method for generating emoticon based on shot image
KR102161230B1 (en) Method and apparatus for user interface for multimedia content search
CN107003720B (en) Scripted digital media message generation
JP2020515124A (en) Method and apparatus for processing multimedia resources
JP2020516994A (en) Text editing method, device and electronic device
CN112672061B (en) Video shooting method and device, electronic equipment and medium
KR102546016B1 (en) Systems and methods for providing personalized video
WO2023061414A1 (en) File generation method and apparatus, and electronic device
WO2023030270A1 (en) Audio/video processing method and apparatus and electronic device
US9973459B2 (en) Digital media message generation
WO2021120872A1 (en) Video processing method and apparatus, and terminal device
CN103294748A (en) Method for excerpting and editing Internet contents
WO2023030306A1 (en) Method and apparatus for video editing, and electronic device
WO2024153191A1 (en) Video generation method and apparatus, electronic device, and medium
CN112367487B (en) Video recording method and electronic equipment
CN114584704A (en) Shooting method and device and electronic equipment
CN114491087A (en) Text processing method and device, electronic equipment and storage medium
CN113778300A (en) Screen capturing method and device
CN113592983A (en) Image processing method and device and computer readable storage medium
CN117395462A (en) Method and device for generating media content, electronic equipment and readable storage medium
CN116543079A (en) Method and device for generating expression image, electronic equipment and readable storage medium
CN116643681A (en) Method, apparatus, device and storage medium for interaction
CN114581564A (en) Processing method and device and electronic equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant