WO2016165615A1 - Method for loading expression special-effect animation in instant video, and electronic device - Google Patents

Method for loading expression special-effect animation in instant video, and electronic device

Info

Publication number
WO2016165615A1
Authority
WO
WIPO (PCT)
Prior art keywords
animation
emoticon
loading
video frame
instant video
Prior art date
Application number
PCT/CN2016/079116
Other languages
English (en)
French (fr)
Inventor
武俊敏
Original Assignee
美国掌赢信息科技有限公司
武俊敏
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 美国掌赢信息科技有限公司 and 武俊敏
Publication of WO2016165615A1 publication Critical patent/WO2016165615A1/zh

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 - Animation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/14 - Systems for two-way working

Definitions

  • The present invention relates to the field of video, and in particular to a method for loading expression special-effect animations in instant video, and to an electronic device.
  • The embodiment of the present invention provides a method for loading expression special-effect animations in instant video, and an electronic device.
  • the technical solution is as follows:
  • A method for loading an expression special-effect animation in instant video, comprising:
  • the identifying of a facial expression of a person in an instant video frame includes:
  • the obtaining, according to the recognition result, the animation of the expression to be loaded includes:
  • the determining of the loading position of the expression special-effect animation in the instant video frame includes:
  • the method further includes:
  • the method further includes:
  • an electronic device comprising:
  • an identification module configured to identify a facial expression in an instant video frame and generate a recognition result;
  • an obtaining module configured to acquire the expression special-effect animation to be loaded according to the recognition result;
  • a determining module configured to determine the loading position of the expression special-effect animation in the instant video frame;
  • a sending module configured to send the expression special-effect animation and the loading position to other electronic devices;
  • a loading module configured to load the expression special-effect animation according to the loading position;
  • a display module configured to display the instant video frame after the expression special-effect animation is loaded.
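The module chain above can be sketched as a minimal pipeline. This is an illustrative Python sketch, not the patent's implementation: the string-based "recognition", the animation library, and the `LoadInstruction` fields are all assumptions.

```python
from dataclasses import dataclass


@dataclass
class LoadInstruction:
    animation_id: str   # identifies the pre-stored effect animation
    position: tuple     # (x, y) loading position in the video frame


def identify(frame_expression: str) -> str:
    """Identification module: recognize a facial expression (placeholder)."""
    return frame_expression


def acquire(recognition_result: str, library: dict) -> str:
    """Obtaining module: look up the effect animation for the result."""
    return library[recognition_result]


def determine_position(face_landmarks: dict) -> tuple:
    """Determining module: derive the loading position from feature points."""
    return face_landmarks["forehead"]


def send(animation_id: str, position: tuple) -> LoadInstruction:
    """Sending module: package animation id + position for the peer device."""
    return LoadInstruction(animation_id, position)


# One frame flowing through the pipeline.
library = {"smile": "smile_fx"}
result = identify("smile")
instr = send(acquire(result, library), determine_position({"forehead": (120, 40)}))
```

On the receiving side, a loading module would place `instr.animation_id` at `instr.position` and the display module would render the composited frame.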
  • the identifying module is specifically configured to:
  • the acquiring module is specifically configured to:
  • the acquiring module is further specifically configured to:
  • the device further includes a receiving module, configured to acquire a cancellation instruction input by the user;
  • the device further includes a deletion module, configured to delete the special effect animation indicated by the cancellation instruction;
  • the sending module is further configured to send the cancellation instruction to the other electronic device.
  • An electronic device including a video input module, a video output module, a sending module, a receiving module, a memory, and a processor coupled to the video input module, the video output module, the sending module, the receiving module, and the memory, wherein the memory stores a set of program code, and the processor is configured to invoke the program code stored in the memory to perform the following operations:
  • The processor further invokes the program code stored in the memory to control the video input module to receive an instant video frame.
  • the processor is configured to invoke program code stored in the memory, and perform the following operations:
  • the processor is configured to invoke program code stored in the memory, and perform the following operations:
  • the processor is configured to invoke program code stored in the memory, and perform the following operations:
  • the processor is configured to invoke program code stored in the memory, and perform the following operations:
  • the processor is configured to invoke the program code stored in the memory, and perform the following operations:
  • controlling the sending module to send the cancellation instruction to the other electronic device.
  • A method for displaying an expression special-effect animation in instant video, comprising:
  • the loading position is determined in a current video frame
  • the expression special-effect animation is obtained by identifying a facial expression in an instant video frame.
  • the method further includes:
  • an electronic device comprising:
  • a receiving module configured to receive an animation effect and a loading position sent by another electronic device
  • a loading module configured to load the emoticon animation to the loading position
  • a display module configured to display the instant video frame after loading the emoticon animation
  • the loading position is determined in a current video frame
  • the expression special-effect animation is obtained by identifying a facial expression in an instant video frame.
  • the receiving module is further configured to receive a cancellation instruction sent by the other electronic device
  • the loading module is further configured to delete the special effect animation indicated by the cancellation instruction.
  • An electronic device including a video output module, a transmit/receive module, a memory, and a processor coupled to the video output module, the transmit/receive module, and the memory, wherein the memory stores a set of program code, and the processor is configured to invoke the program code stored in the memory to perform the following operations:
  • the loading position is determined in a current video frame
  • the expression special-effect animation is obtained by identifying a facial expression in an instant video frame.
  • the processor is configured to invoke program code stored in the memory, and perform the following operations:
  • Embodiments of the present invention provide a method for loading expression special-effect animations in instant video, and an electronic device.
  • The method includes: recognizing a facial expression in an instant video frame and generating a recognition result; acquiring the expression special-effect animation to be loaded according to the recognition result; determining the loading position of the expression special-effect animation in the instant video frame; and sending the expression special-effect animation and the loading position to the other electronic device.
  • By recognizing the facial expression in an instant video frame, a recognition result is generated, and according to the recognition result the acquired expression special-effect animation is loaded at its loading position in the instant video, realizing special-effect loading in real-time video.
  • Because the loading position is continuously recognized, the expression special-effect animation follows the character and changes correspondingly, improving the user experience. In addition, recognizing the facial expression in the instant video frame, generating the recognition result, and acquiring the expression special-effect animation to be loaded according to the recognition result allows the animation to be loaded automatically, which simplifies the operation steps compared with manual loading and improves the user experience.
  • FIG. 1 is a flowchart of a method for loading an expression special-effect animation in instant video according to an embodiment of the present invention;
  • FIG. 2 is a flowchart of a method for loading an expression special-effect animation in instant video according to an embodiment of the present invention;
  • FIG. 3 is a schematic diagram of changes of a real-time video interface according to an embodiment of the present invention;
  • FIG. 4 is a schematic diagram of changes of a real-time video interface according to an embodiment of the present invention;
  • FIG. 5 is a flowchart of a method for loading an expression special-effect animation in instant video according to an embodiment of the present invention;
  • FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
  • FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
  • FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
  • FIG. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • An embodiment of the present invention provides a method for loading an expression special-effect animation in instant video.
  • The method is applied to an interactive system that includes at least two electronic devices, which perform real-time video communication by running a program.
  • The electronic device may be a smartphone, a tablet computer, or another electronic device; the specific electronic device is not limited in the embodiment of the present invention.
  • The electronic device includes at least a video input module and a video display module; the video input module may include a camera, and the video display module may include a display screen.
  • The at least two electronic devices can directly perform real-time video interaction, connected through a wireless connection such as Bluetooth or WiFi, or through a connection device such as a router; the at least two electronic devices can also perform instant video interaction through a server, which can be the server of the application.
  • The method provided by the embodiment of the present invention may also be applied to an interactive system that includes only an electronic device and a user, where the electronic device includes at least a video input module and a video display module; the video input module may include a camera, the video display module may include a display screen, and at least an instant video program can run on the electronic device.
  • The embodiment of the present invention may also cover other application scenarios; the specific application scenario is not limited. It should be noted that in the embodiment of the present invention, the expression special-effect animation is obtained by identifying the expression of the character in the instant video.
  • An embodiment of the present invention provides a method for loading an expression special-effect animation in instant video.
  • the method flow includes:
  • The loading position of the expression special-effect animation in the instant video frame is obtained according to the face detail feature point parameters in the instant video frame.
  • the method further includes:
  • The expression special-effect animation is loaded, and the instant video frame after the animation is loaded is displayed.
  • the method further includes:
  • An embodiment of the present invention provides a method for loading an expression special-effect animation in instant video: a facial expression in an instant video frame is recognized to generate a recognition result, and the acquired expression special-effect animation is loaded at its loading position in the instant video according to that result.
  • This realizes special-effect loading in instant video and satisfies the user's need to interact by loading expression special-effect animations during a video call, enriching the forms of video interaction and improving the user experience. Because the loading position is determined in the current video frame, the animation is placed in the video more accurately. At the same time, the position of the loaded animation is continuously recognized as the character's head moves, so the animation follows the character and changes correspondingly. In addition, acquiring the animation to be loaded according to the recognition result allows automatic loading, which simplifies the operation steps compared with manual loading.
  • An embodiment of the present invention provides a method for loading an expression special-effect animation in instant video; as shown in FIG. 2, the method flow includes:
  • The face detail feature point parameters describe the outline of the face details; the face details include at least the eyes, mouth, eyebrows, and nose.
  • Other face details may also be included; the embodiment of the present invention does not limit the specific face details.
  • The face detail feature point parameter is determined by the face detail feature point coordinates and the texture feature point coordinates corresponding to the face detail feature points.
  • The face detail feature parameter may further include at least the scale and direction of each face detail feature point, and may further include other parameters; the specific face feature parameters are not limited in the embodiment of the present invention.
  • Texture feature points are obtained near each feature point and are used to uniquely determine the feature points; the texture feature points do not change with changes in light, angle, and the like.
  • The embodiment of the present invention determines face feature points with the help of texture features. Because a texture feature point describes the region where a feature point is located, it can be used to uniquely determine that feature point. Determining the face detail feature parameters of the facial expression from both the feature points and the texture feature points ensures that the feature points in the instant video coincide with the actual feature points, guaranteeing the recognition quality of image details and improving the reliability of the obtained parameters.
  • Feature points and texture feature points can be extracted from the face by a preset extraction model or extraction algorithm, or by other means; the specific extraction model, extraction algorithm, and extraction method are not limited in the embodiment of the present invention.
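The idea of a texture feature that does not change with lighting can be illustrated with a minimal sketch (not the patent's algorithm): describe each face detail feature point by a small pixel patch around it, normalized to zero mean and unit norm, so a uniform brightness change leaves the descriptor unchanged.

```python
import numpy as np


def texture_descriptor(image: np.ndarray, point: tuple, radius: int = 2) -> np.ndarray:
    """Normalized patch around (row, col) `point`, insensitive to uniform lighting."""
    y, x = point
    patch = image[y - radius:y + radius + 1, x - radius:x + radius + 1].astype(float)
    patch -= patch.mean()              # remove the brightness offset
    norm = np.linalg.norm(patch)
    return patch / norm if norm > 0 else patch


# Synthetic "face" image; a brighter copy stands in for a lighting change.
rng = np.random.default_rng(0)
face = rng.integers(0, 200, size=(32, 32))
d1 = texture_descriptor(face, (10, 12))
d2 = texture_descriptor(face + 40, (10, 12))   # same face under brighter light
```

Here `d1` and `d2` coincide, matching the claim that the texture feature does not vary with illumination; a real system would use a richer descriptor, but the invariance argument is the same.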
  • The process may be:
  • a. acquiring, according to the at least one face detail feature point parameter, at least one feature point coordinate and at least one texture feature point coordinate describing that parameter;
  • d. generating a feature point vector corresponding to the at least one face detail feature point parameter from the at least one feature point coordinate and the at least one texture feature point coordinate in the standard pose matrix.
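Mapping coordinates into a "standard pose" can be sketched as a similarity transform that aligns detected anchor points (here, the two eyes) to canonical positions, making coordinates from different frames comparable. The canonical eye positions are assumed values, and complex numbers are used only as a compact 2-D point encoding; the patent does not specify this construction.

```python
import numpy as np

# Assumed canonical anchor positions in the standard pose.
CANON_LEFT_EYE, CANON_RIGHT_EYE = 30 + 40j, 70 + 40j


def to_standard_pose(points: np.ndarray, left_eye: complex, right_eye: complex) -> np.ndarray:
    """Similarity transform taking the detected eyes onto the canonical eyes.

    A complex map a*z + b encodes scale + in-plane rotation + translation.
    """
    a = (CANON_RIGHT_EYE - CANON_LEFT_EYE) / (right_eye - left_eye)
    b = CANON_LEFT_EYE - a * left_eye
    return a * points + b


pts = np.array([10 + 20j, 50 + 20j, 30 + 60j])   # detected: eyes + mouth corner
std = to_standard_pose(pts, left_eye=pts[0], right_eye=pts[1])
```

After the transform the eye points land exactly on the canonical anchors, so feature vectors built from `std` no longer depend on where or how large the face appeared in the frame.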
  • The facial expression indicated by the calculation result is the generated recognition result.
  • Alternatively, the facial expression in the current instant video frame may be obtained in other manners; the specific manner is not limited in the embodiment of the present invention.
  • Steps 201 to 202 implement the process of identifying a facial expression in an instant video frame and generating a recognition result. Besides the foregoing manner, the process may be implemented in other ways; the embodiment of the present invention does not limit the specific process of recognizing the facial expression and generating the recognition result.
  • By acquiring the at least one feature point and the at least one texture feature point in the standard pose matrix, the embodiment of the present invention eliminates the influence of external factors such as illumination and angle on the face in the instant video, making the acquired feature points and texture feature points more comparable, so that obtaining the expression special-effect animation from the recognized expression is more accurate.
  • An expression special-effect animation corresponding to each facial expression is stored in advance.
  • For example, if the facial expression is “smile”, the pre-stored animation corresponding to “smile” is obtained; if the facial expression is “haha laugh”, the pre-stored animation corresponding to “haha laugh” is obtained.
  • Animations corresponding to other facial expressions may also be included; they are not listed here one by one.
  • Step 203 is the process of acquiring the expression special-effect animation to be loaded according to the recognition result.
  • The process may also be implemented in other manners; the specific manner is not limited in the embodiment of the present invention.
  • The expression special-effect animation can also be obtained by computing the similarity between the facial expression and the pre-stored expression special-effect animations.
  • If the similarity between the facial expression and a pre-stored expression special-effect animation is greater than or equal to a preset threshold, that pre-stored animation is determined to correspond to the facial expression; if the similarity is less than the preset threshold, it is determined that no pre-stored animation corresponds to the facial expression, and the process ends.
  • By computing the similarity between the facial expression and the pre-stored expression special-effect animations, the embodiment of the present invention can determine the animation corresponding to the facial expression, improving the efficiency of obtaining the animation and avoiding loading an animation that does not match the user's facial expression at all, which improves the user experience.
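The threshold rule above can be sketched as follows. The expression descriptors, the reference descriptors attached to each stored animation, the cosine-similarity measure, and the 0.8 threshold are all illustrative assumptions; the patent only specifies "similarity greater than or equal to a preset threshold, else end the process".

```python
import math

PRESET_THRESHOLD = 0.8

# Hypothetical library: animation id -> reference expression descriptor.
STORED = {
    "smile_fx": [0.9, 0.1],
    "haha_fx": [0.1, 0.95],
}


def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))


def match_animation(descriptor):
    """Return the best-matching animation id, or None below the threshold."""
    best_id, best_sim = None, -1.0
    for anim_id, ref in STORED.items():
        sim = cosine_similarity(descriptor, ref)
        if sim > best_sim:
            best_id, best_sim = anim_id, sim
    # Only load when the similarity reaches the preset threshold; otherwise end.
    return best_id if best_sim >= PRESET_THRESHOLD else None


hit = match_animation([0.85, 0.15])   # clearly close to the "smile" reference
miss = match_animation([0.7, 0.7])    # ambiguous expression, below threshold
```

The `None` branch corresponds to the "process ends" case: no animation is loaded rather than forcing a poor match.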
  • Optionally, the method further includes displaying prompt information to the user. The prompt information may be shown as text telling the user that the special-effect animation is being loaded, or presented in other ways; the embodiment of the present invention does not limit the specific manner.
  • Because the face detail feature point parameter is determined by the feature point coordinates and the corresponding texture feature point coordinates, the coordinates of the face detail feature points in the instant video frame are determined from the face detail feature parameters, and the loading position of the expression special-effect animation in the instant video frame is then obtained from those coordinates.
  • The process of acquiring the coordinates of the face feature points is the same as that described in step 201, and is not described again here.
  • The face details are obtained by acquiring the face detail feature point parameters, which are determined by the face detail feature point coordinates and the texture feature point coordinates. The coordinates of the face detail feature points are therefore determined at the same time as the parameters, so the loading position is obtained more accurately and concisely and does not need to be computed again. The coordinates can be used directly to determine the loading position of the expression special-effect animation, which reduces the operation steps and improves the user experience.
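Deriving the loading position directly from landmark coordinates already produced by recognition can be sketched like this. The landmark names and the choice to anchor the effect above the eyebrows are assumptions for illustration; the patent only says the position is obtained from face detail feature point coordinates.

```python
def loading_position(landmarks: dict) -> tuple:
    """Anchor the effect above the midpoint of the two eyebrows.

    Reuses coordinates produced during recognition; no second detection pass.
    """
    (lx, ly), (rx, ry) = landmarks["left_brow"], landmarks["right_brow"]
    eyebrow_span = rx - lx
    # Place the effect half an eyebrow-span above the higher eyebrow.
    return ((lx + rx) // 2, min(ly, ry) - eyebrow_span // 2)


# Hypothetical coordinates from the recognition step (x, y in pixels).
landmarks = {"left_brow": (100, 80), "right_brow": (140, 78)}
pos = loading_position(landmarks)
```

Scaling the offset by the eyebrow span keeps the effect proportionate as the face moves closer to or farther from the camera, which is one way the animation can "follow the change of the character".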
  • The expression special-effect animation and the loading position are sent to the other electronic devices performing real-time video interaction with this electronic device. The data may be carried in a pass-through message between the devices; the animation and loading position data may be the characteristic parameters of the expression special-effect animation and the loading position, or a loading instruction for the animation together with the loading position parameters.
  • Specifically, the loading instruction and the loading position of the expression special-effect animation may be sent directly to the other electronic devices performing real-time video interaction with this electronic device, or forwarded to them via a server.
  • The loading instruction of an expression special-effect animation corresponds uniquely to that animation.
  • Because a special-effect instruction occupies less memory and transmits faster than the animation itself, sending the loading instruction and the loading position to the other electronic devices or the server improves the synchronization and efficiency of expression loading and improves the user experience.
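The two payload options described here, shipping the animation data itself versus shipping only a compact loading instruction, can be compared with a small sketch. The JSON field names and the frame representation are illustrative, not the patent's wire format.

```python
import json


def instruction_message(animation_id: str, position: tuple) -> str:
    """Compact pass-through message: instruction + loading position only."""
    return json.dumps({"type": "load", "anim": animation_id, "pos": position})


def full_message(animation_frames: list, position: tuple) -> str:
    """Heavier alternative: ship the animation frame data itself."""
    return json.dumps({"type": "load", "frames": animation_frames, "pos": position})


frames = [[0] * 64 for _ in range(5)]              # stand-in animation data
small = instruction_message("angry_fx", (120, 58))
big = full_message(frames, (120, 58))
```

Since both peers pre-store the animations, the instruction message is sufficient and far smaller, which is the memory/transmission-speed advantage the text claims.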
  • the method further includes
  • When the electronic device receives a loading instruction for the expression special-effect animation triggered by the user, it loads the recognized animation at the loading position, and at the same time sends the characteristic parameters of the loaded animation and the loading position, through a pass-through message, to the other electronic devices performing instant video interaction.
  • Alternatively, the server loads the expression special-effect animation at the loading position according to the animation and loading position parameters, and then sends the video containing the loaded animation to the electronic devices performing the video interaction, so that the animation is displayed on their screens.
  • Having the server perform the special-effect loading saves the electronic device's system and processing resources compared with loading on the device itself. Moreover, since the server can store all the special-effect data, loading on the server avoids the case where the electronic device lacks some of the special-effect data, saving the device's storage and network resources.
  • the method further includes:
  • The electronic device obtains the cancellation instruction input by the user through its receiving module, or obtains it in other manners; the embodiment of the present invention does not limit the specific manner of obtaining the cancellation instruction.
  • The cancellation instruction indicates the expression special-effect animation that the user wants to remove.
  • For example, the user can remove an effect by tapping an erase icon on the video interface and then tapping the special-effect animation to be removed, or can trigger the cancellation instruction by pressing a function key with a return function; the embodiment of the present invention does not limit the specific icon or its location.
  • The special-effect data corresponding to the expression special-effect animation indicated by the cancellation instruction is deleted.
  • The user can thus remove or cancel an already loaded expression special-effect animation, which further satisfies the user's personalized needs and improves the interaction experience.
  • The electronic device sends the cancellation instruction to the other electronic devices through its sending module, or in other manners; the specific manner is not limited in the embodiment of the present invention.
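The cancellation flow, delete locally and forward the same instruction to the peers so both sides stay in sync, can be sketched as follows. The effect-id keys and message dictionary are illustrative assumptions.

```python
# Hypothetical local state: effect id -> loaded animation.
loaded_effects = {"fx-1": "angry_fx", "fx-2": "shy_fx"}
outbox = []   # pass-through messages queued for the peer devices


def cancel_effect(effect_id: str) -> None:
    # Delete the special-effect data indicated by the cancellation instruction.
    loaded_effects.pop(effect_id, None)
    # Forward the cancellation instruction to the other electronic devices.
    outbox.append({"type": "cancel", "effect": effect_id})


cancel_effect("fx-1")
```

Using `pop(..., None)` makes the deletion idempotent, so a duplicate cancellation instruction arriving from the network does not raise an error.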
  • The method for loading the expression special-effect animation in instant video provided by the embodiment of the present invention is further described below with reference to the figures.
  • As shown in FIG. 3, suppose the current user's expression is angry. After the current user's expression is recognized as angry, the corresponding expression special-effect animation is loaded in the instant video interface shown in the first frame of FIG. 3, and the animation lasts for 5 frames.
  • Suppose the expression of the user performing real-time video interaction with the current user is shy. After that user's expression is recognized as shy, the expression special-effect animation corresponding to shyness is loaded in the instant video interface shown in the first frame of FIG. 4, and the animation lasts for 4 frames.
  • The current user's expression is recognized in the instant video frames at and before the first frame; the expression special-effect animation is loaded on the five instant video frames after the first frame, and after the fifth of these frames the interface returns to the instant video interface without the loaded animation, as shown in the first frame.
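The 5-frame lifetime in this example can be sketched as a per-frame flag: the effect is triggered when the expression is recognized, shown on the next N frames, and then the interface returns to the plain video. The function and its parameters are illustrative; only the frame counts come from the text.

```python
def render_stream(total_frames: int, trigger_frame: int, duration: int = 5):
    """Return, per frame, whether the effect animation is shown."""
    shown = []
    for frame in range(total_frames):
        # Active on the `duration` frames immediately after the trigger frame.
        active = trigger_frame < frame <= trigger_frame + duration
        shown.append(active)
    return shown


# Expression recognized at frame 2; effect persists for the next 5 frames.
timeline = render_stream(total_frames=10, trigger_frame=2, duration=5)
```

The shy example in FIG. 4 would be the same sketch with `duration=4`.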
  • Suppose the users performing the instant video interaction are a first user and a second user. If the first user's expression is recognized as angry, the instant video interface displayed on the second user's electronic device can be as shown in FIG. 3; after the first user switches the displayed user by tapping the small video window in the upper-right corner of the instant video interface shown in the first frame of FIG. 3, the interface displayed on the first user's electronic device may also be as shown in FIG. 3. In some scenes, if the first user's expression is recognized as angry and the second user's expression is recognized as shy, the instant video interface displayed on the second user's electronic device can be as shown in FIG. 3, and the interface displayed on the first user's electronic device can be as shown in FIG. 4.
  • The invention provides a method for loading expression special effects in instant video: a facial expression in an instant video frame is recognized to generate a recognition result, and the acquired expression special-effect animation is loaded at its loading position in the instant video according to that result.
  • This satisfies the user's need to interact by loading expression special-effect animations during a video call, enriches the forms of video interaction, and improves the user experience. Because the loading position is determined in the current video frame, the animation is placed in the video more accurately. At the same time, the position of the loaded animation is continuously recognized as the character's head moves, so the animation follows the character and changes correspondingly. In addition, recognizing the facial expression, generating the recognition result, and acquiring the animation to be loaded according to that result allows automatic loading, which simplifies the operation steps compared with manual loading and improves the user experience.
  • The embodiment of the present invention determines face feature points with the help of texture features. Because a texture feature point describes the region where a feature point is located, it can be used to uniquely determine that feature point. Determining the face detail feature parameters of the facial expression from both the feature points and the texture feature points ensures that the feature points in the instant video coincide with the actual feature points, guaranteeing the recognition quality of image details and improving the reliability of the obtained parameters.
  • By acquiring the at least one feature point and the at least one texture feature point in the standard pose matrix, the embodiment of the present invention eliminates the influence of external factors such as illumination and angle on the face in the instant video, making the acquired feature points and texture feature points more comparable and thus making it easier to obtain the expression special-effect animation from the recognized face in live video.
  • By computing the similarity between the facial expression and the pre-stored expression special-effect animations, the embodiment of the present invention can determine the animation corresponding to the facial expression, improving the efficiency of obtaining the animation and avoiding loading an animation that does not match the user's facial expression at all, which improves the user experience.
  • The embodiment of the present invention obtains the face details by acquiring the face detail feature point parameters, which are determined by the face detail feature point coordinates and the texture feature point coordinates. The coordinates of the face detail feature points are therefore determined at the same time as the parameters, so the loading position is obtained more accurately and concisely and does not need to be computed again; the coordinates can be used directly to determine the loading position of the expression special-effect animation, which reduces the operation steps and improves the user experience.
  • Because a special-effect instruction occupies less memory and transmits faster than the expression special-effect animation itself, sending the loading instruction and the loading position to the other electronic devices or the server performing real-time video interaction with this electronic device improves the synchronization and efficiency of expression loading and improves the user experience.
  • the emoticon animation is loaded by the server. Since the server can store all the special effect data, the server is used to load the special effect data, and the special effect data is loaded by the electronic device, and the electronic device does not store some special effect data. , saving storage resources and network resources of electronic devices.
  • the embodiment of the present invention eliminates the special effect animation according to the instruction of the elimination instruction, so that the user can eliminate or cancel the already loaded expression special effect animation, further satisfying the user's personalized requirement, and improving the user interaction experience.
  • An embodiment of the present invention provides a method for displaying an expressive effect in an instant video.
  • the method flow includes:
  • the electronic device can receive the emoticon effect animation and loading position sent by another electronic device, or obtain them on this device from the facial expression in the instant video frame.
  • the loading position is determined by the other electronic device from the current video frame and then sent to this device; the emoticon effect animation is obtained by the other electronic device by recognizing the facial expression in the instant video frame and is then sent to this device.
  • if the electronic device sends an emoticon effect animation loading instruction and a loading position, the method may further include:
  • step 601: receive the emoticon effect animation instruction and the loading position sent by the electronic device; this step is the same as step 501 and is not described again here.
  • step 602: according to the emoticon effect animation instruction, determine whether this electronic device stores the effect animation the instruction indicates; if it is stored, execute step 603; if not, execute step 604.
  • specifically, the instruction may be compared against the multiple effect animation instructions pre-stored on this electronic device to determine whether the indicated effect is stored locally; the specific determination method is not limited in the embodiment of the present invention.
  • step 603: load the emoticon effect animation indicated by the instruction at the determined loading position; this step is the same as step 502 and is not described again here.
  • step 604: download the effect animation indicated by the instruction from the server, and after step 604, execute step 603; the specific download process is not limited in the embodiment of the present invention.
  • because effect animations can be downloaded from the server on demand, the electronic device and the server do not need to pre-store large numbers of effect animations, which saves storage space, improves the user experience, and speeds up transmission.
  • An embodiment of the present invention provides a method for displaying an expressive effect in an instant video.
  • the recognition result is generated by recognizing the facial expression in an instant video frame, and the acquired emoticon effect animation is loaded at the loading position in the instant video according to that result. This realizes emoticon-effect loading in instant video, satisfying the user's need to interact by loading effect animations during a video call, enriching the forms of video interaction, and improving the user experience. Because the loading position corresponding to the animation is determined in the current video frame, the animation is placed in the video more accurately. The loaded animation's position is continuously re-recognized as the user's head moves, so the animation changes correspondingly with the person. In addition, recognizing the expression and loading the corresponding animation automatically simplifies the operation steps compared with manual loading, improving the user experience.
  • An embodiment of the present invention provides an electronic device 6.
  • the electronic device 6 includes:
  • the recognition module 61 is configured to recognize a facial expression in an instant video frame and generate a recognition result;
  • the obtaining module 62 is configured to acquire, according to the recognition result, the emoticon effect animation to be loaded;
  • the determining module 63 is configured to determine the loading position of the emoticon effect animation in the instant video frame;
  • the sending module 64 is configured to send the emoticon effect animation and the loading position to other electronic devices;
  • the loading module 65 is configured to load the emoticon effect animation according to the loading position;
  • the display module 66 is configured to display the instant video frame after the emoticon effect animation is loaded.
  • the identification module 61 is specifically configured to:
  • the obtaining module 62 is specifically configured to:
  • the determining module 63 is further configured to:
  • the loading position of the emoticon effect animation in the instant video frame is obtained according to the face detail feature point parameter in the instant video frame.
  • the device further includes a receiving module, configured to acquire a cancellation instruction input by the user;
  • the device further includes a deletion module for deleting the special effect animation indicated by the elimination instruction;
  • the sending module 64 is also used to send a cancellation command to other electronic devices.
  • the embodiment of the invention provides an electronic device that generates a recognition result by recognizing the facial expression in an instant video frame and, according to that result, loads the acquired emoticon effect animation at the loading position in the instant video. This realizes emoticon-effect loading in instant video, satisfying the user's need to interact by loading effect animations during a video call, enriching the forms of video interaction, and improving the user experience. Because the loading position corresponding to the animation is determined in the current video frame, the animation is placed in the video more accurately; the position is continuously re-recognized as the user's head moves, so the animation changes correspondingly with the person. In addition, recognizing the expression and loading the corresponding animation automatically simplifies the operation steps compared with manual loading, improving the user experience.
  • An embodiment of the present invention provides an electronic device 7. As shown in FIG. 7, it includes a video input module 71, a video output module 72, a sending module 73, a receiving module 74, a memory 75, and a processor 76 connected to the video input module 71, the video output module 72, the sending module 73, the receiving module 74, and the memory 75. The memory 75 stores a set of program code, and the processor 76 is configured to call the program code stored in the memory 75 to perform the following operations:
  • control the sending module 73 to send the emoticon effect animation and the loading position to other electronic devices;
  • the processor 76 also calls the program code stored in the memory 75 to control the video input module 71 to receive instant video frames.
  • the processor 76 is configured to call the program code stored in the memory 75, and perform the following operations:
  • the processor 76 is configured to call the program code stored in the memory 75, and perform the following operations:
  • the processor 76 is configured to call the program code stored in the memory 75, and perform the following operations:
  • the loading position of the emoticon effect animation in the instant video frame is obtained according to the face detail feature point parameter in the instant video frame.
  • the processor 76 is configured to call the program code stored in the memory 75, and perform the following operations:
  • load the emoticon effect animation according to the loading position, and control the video output module 72 to display the instant video frame after the animation is loaded.
  • the processor 76 is configured to call the program code stored in the memory 75, and perform the following operations:
  • control the receiving module 74 to acquire a cancellation instruction input by the user;
  • control the sending module 73 to send the cancellation instruction to other electronic devices.
  • the embodiment of the invention provides an electronic device that generates a recognition result by recognizing the facial expression in an instant video frame and, according to that result, loads the acquired emoticon effect animation at the loading position in the instant video. This realizes emoticon-effect loading in instant video, satisfying the user's need to interact by loading effect animations during a video call, enriching the forms of video interaction, and improving the user experience. Because the loading position corresponding to the animation is determined in the current video frame, the animation is placed in the video more accurately; the position is continuously re-recognized as the user's head moves, so the animation changes correspondingly with the person. In addition, recognizing the expression and loading the corresponding animation automatically simplifies the operation steps compared with manual loading, improving the user experience.
  • An embodiment of the present invention provides an electronic device 8.
  • the electronic device 8 includes:
  • the receiving module 81 is configured to receive an animation effect and a loading position sent by other electronic devices;
  • a loading module 82 configured to load an emoticon animation to a loading location
  • a display module 83 configured to display an instant video frame after loading the animation effect animation
  • the loading position is determined by other electronic devices according to the current video frame, and then sent to the electronic device, and the expressive effect animation is obtained by the other electronic device by recognizing the facial expression in the instant video frame, and then sending to the present Electronic equipment.
  • the electronic device 8 further includes:
  • the receiving module is further configured to receive a cancellation instruction sent by another electronic device
  • the loading module is further configured to delete the special effect animation indicated by the cancellation instruction.
  • An embodiment of the present invention provides an electronic device that generates a recognition result by recognizing the facial expression in an instant video frame and, according to that result, loads the acquired emoticon effect animation at the loading position in the instant video. This realizes emoticon-effect loading in instant video, satisfying the user's need to interact by loading effect animations during a video call, enriching the forms of video interaction, and improving the user experience. Because the loading position corresponding to the animation is determined in the current video frame, the animation is placed in the video more accurately; the position is continuously re-recognized as the user's head moves, so the animation changes correspondingly with the person. In addition, recognizing the expression and loading the corresponding animation automatically simplifies the operation steps compared with manual loading, improving the user experience.
  • An embodiment of the present invention provides an electronic device. As shown in FIG. 9, it includes a video output module 91, a transmitting/receiving module 92, a memory 93, and a processor 94 connected to the video output module 91, the transmitting/receiving module 92, and the memory 93, where the memory 93 stores a set of program code.
  • the processor 94 is configured to call the program code stored in the memory 93 to perform the following operations:
  • the loading position is determined by other electronic devices according to the current video frame, and then sent to the electronic device, and the expressive effect animation is obtained by the other electronic device by recognizing the facial expression in the instant video frame, and then sending to the present Electronic equipment.
  • the processor 94 is configured to call the program code stored in the memory 93, and perform the following operations:
  • the embodiment of the present invention provides an effect loading method in instant video, which generates a recognition result by recognizing the facial expression in an instant video frame and, according to the recognition result, loads the acquired emoticon effect animation at the loading position in the instant video.
  • when the electronic device provided by the foregoing embodiments triggers the emoticon-effect loading method in instant video, the division into the functional modules described above is only an example; in practical applications, the functions may be assigned to different functional modules as needed, i.e., the internal structure of the electronic device may be divided into different functional modules to complete all or part of the functions described above.
  • the electronic device provided by the foregoing embodiment is the same as the embodiment of the method for loading an animation effect in the instant video. The specific implementation process is described in detail in the method embodiment, and details are not described herein again.
  • a person skilled in the art may understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention discloses a method and electronic device for loading emoticon effect animations in instant video, belonging to the field of video. The method includes: recognizing a facial expression in an instant video frame and generating a recognition result; acquiring, according to the recognition result, the emoticon effect animation to be loaded; determining the loading position of the emoticon effect animation in the instant video frame; and sending the emoticon effect animation and the loading position to other electronic devices. This satisfies users' personalized needs for instant video calls, enriches the forms of video interaction, and improves the user experience.

Description

Method and Electronic Device for Loading Emoticon Effect Animations in Instant Video

TECHNICAL FIELD

The present invention relates to the field of video, and in particular to a method and electronic device for loading emoticon effect animations in instant video.

BACKGROUND

Users can now hold video conversations through the video function of an electronic device, but during a video call each party can only see the other party and the video background through the video frame. The forms of presentation in a video call are therefore limited, and it is difficult for users to add interactive emoticon effect animations in a simple way, so a simple and effective method for adding emoticon effects to faces in video is needed.

Because the prior art provides no such simple and effective method, emoticon effect animations cannot be added to instant video in real time, the need for users to interact during a video call cannot be met, and the user experience is poor.
SUMMARY

To satisfy users' demand for diversified instant video and improve the user experience, embodiments of the present invention provide a method and electronic device for loading emoticon effect animations in instant video. The technical solutions are as follows:

In a first aspect, a method for loading emoticon effect animations in instant video is provided, the method including:

recognizing a facial expression in an instant video frame, and generating a recognition result;

acquiring, according to the recognition result, the emoticon effect animation to be loaded;

determining the loading position of the emoticon effect animation in the instant video frame;

sending the emoticon effect animation and the loading position to other electronic devices.
With reference to the first aspect, in a first possible implementation, recognizing the facial expression in the instant video frame includes:

acquiring face-detail feature point parameters in the instant video frame;

acquiring, according to the face-detail feature point parameters, the facial expression in the current instant video frame.

With reference to the first possible implementation of the first aspect, in a second possible implementation, acquiring the emoticon effect animation to be loaded according to the recognition result includes:

acquiring, according to the facial expression, the emoticon effect animation corresponding to the facial expression.

With reference to the first or second possible implementation of the first aspect, in a third possible implementation, determining the loading position of the emoticon effect animation in the instant video frame includes:

acquiring the loading position of the emoticon effect animation in the instant video frame according to the face-detail feature point parameters in the instant video frame.

With reference to the third possible implementation of the first aspect, in a fourth possible implementation, the method further includes:

loading the emoticon effect animation according to the loading position, and displaying the instant video frame after the animation is loaded.

With reference to the fourth possible implementation of the first aspect, in a fifth possible implementation, the method further includes:

acquiring a cancellation instruction input by the user;

deleting the effect animation indicated by the cancellation instruction;

sending the cancellation instruction to the other electronic devices.
In a second aspect, an electronic device is provided, the electronic device including:

a recognition module, configured to recognize a facial expression in an instant video frame and generate a recognition result;

an obtaining module, configured to acquire, according to the recognition result, the emoticon effect animation to be loaded;

a determining module, configured to determine the loading position of the emoticon effect animation in the instant video frame;

a sending module, configured to send the emoticon effect animation and the loading position to other electronic devices;

a loading module, configured to load the emoticon effect animation according to the loading position;

a display module, configured to display the instant video frame after the emoticon effect animation is loaded.

With reference to the second aspect, in a first possible implementation, the recognition module is specifically configured to:

acquire face-detail feature point parameters in the instant video frame;

acquire, according to the face-detail feature point parameters, the facial expression in the current instant video frame.

With reference to the first possible implementation of the second aspect, in a second possible implementation, the obtaining module is specifically configured to:

acquire, according to the facial expression, the emoticon effect animation corresponding to the facial expression.

With reference to the first or second possible implementation of the second aspect, in a third possible implementation, the determining module is further configured to:

acquire the loading position of the emoticon effect animation in the instant video frame according to the face-detail feature point parameters in the instant video frame.

With reference to the second aspect, in a fourth possible implementation,

the device further includes a receiving module, configured to acquire a cancellation instruction input by the user;

the device further includes a deleting module, configured to delete the effect animation indicated by the cancellation instruction;

the sending module is further configured to send the cancellation instruction to the other electronic devices.
In a third aspect, an electronic device is provided, including a video input module, a video output module, a sending module, a receiving module, a memory, and a processor connected to the video input module, the video output module, the sending module, the receiving module, and the memory, where the memory stores a set of program code and the processor is configured to call the program code stored in the memory to perform the following operations:

recognize a facial expression in the instant video, and generate a recognition result;

acquire, according to the recognition result, the emoticon effect animation to be loaded;

determine the loading position of the emoticon effect animation in the instant video frame;

control the sending module to send the emoticon effect animation and the loading position to other electronic devices;

where the processor also calls the program code stored in the memory to control the video input module to receive instant video frames.

With reference to the third aspect, in a first possible implementation, the processor is configured to call the program code stored in the memory to perform the following operations:

acquire face-detail feature point parameters in the instant video;

acquire, according to the face-detail feature point parameters, the facial expression in the current instant video frame.

With reference to the first possible implementation of the third aspect, in a second possible implementation, the processor is configured to call the program code stored in the memory to perform the following operation:

acquire, according to the facial expression, the emoticon effect animation corresponding to the facial expression.

With reference to the first or second possible implementation of the third aspect, in a third possible implementation, the processor is configured to call the program code stored in the memory to perform the following operation:

acquire the loading position of the emoticon effect animation in the instant video frame according to the face-detail feature point parameters in the instant video frame.

With reference to the third possible implementation of the third aspect, in a fourth possible implementation, the processor is configured to call the program code stored in the memory to perform the following operations:

load the emoticon effect animation according to the loading position, and control the video output module to display the instant video frame after the animation is loaded.

With reference to the fourth possible implementation of the third aspect, in a fifth possible implementation, the processor is configured to call the program code stored in the memory to perform the following operations:

control the receiving module to acquire a cancellation instruction input by the user;

delete the effect animation indicated by the cancellation instruction;

control the sending module to send the cancellation instruction to the other electronic devices.
In a fourth aspect, a method for displaying emoticon effects in instant video is provided, the method including:

acquiring an emoticon effect animation and a loading position;

loading the emoticon effect animation at the loading position, and displaying the instant video frame after the animation is loaded;

where the loading position is determined in the current video frame, and the emoticon effect animation is obtained by recognizing the facial expression in the instant video frame.

With reference to the fourth aspect, in a first possible implementation, the method further includes:

receiving a cancellation instruction sent by another electronic device;

deleting the effect animation indicated by the cancellation instruction.
In a fifth aspect, an electronic device is provided, the electronic device including:

a receiving module, configured to receive an emoticon effect animation and a loading position sent by other electronic devices;

a loading module, configured to load the emoticon effect animation at the loading position;

a display module, configured to display the instant video frame after the animation is loaded;

where the loading position is determined in the current video frame, and the emoticon effect animation is obtained by recognizing the facial expression in the instant video frame.

With reference to the fifth aspect, in a first possible implementation,

the receiving module is further configured to receive a cancellation instruction sent by the other electronic devices;

the loading module is further configured to delete the effect animation indicated by the cancellation instruction.
In a sixth aspect, an electronic device is provided, including a video output module, a sending/receiving module, a memory, and a processor connected to the video output module, the sending/receiving module, and the memory, where the memory stores a set of program code and the processor is configured to call the program code stored in the memory to perform the following operations:

receive an emoticon effect animation and a loading position sent by other electronic devices;

load the emoticon effect animation at the loading position, and control the video output module to display the instant video frame after the animation is loaded;

where the loading position is determined in the current video frame, and the emoticon effect animation is obtained by recognizing the facial expression in the instant video frame.

With reference to the sixth aspect, in a first possible implementation, the processor is configured to call the program code stored in the memory to perform the following operations:

receive a cancellation instruction sent by the other electronic devices;

delete the effect animation indicated by the cancellation instruction.
Embodiments of the present invention provide a method and electronic device for loading emoticon effect animations in instant video, including: recognizing a facial expression in an instant video frame and generating a recognition result; acquiring, according to the recognition result, the emoticon effect animation to be loaded; determining the loading position of the animation in the instant video frame; and sending the animation and the loading position to other electronic devices. According to the method provided by the embodiments of the present invention, a facial expression in an instant video frame is recognized to generate a recognition result, and the acquired emoticon effect animation is loaded at the loading position in the instant video according to that result. This realizes emoticon-effect loading in instant video, satisfies the user's need to interact by loading effect animations during a video call, enriches the forms of video interaction, and improves the user experience. Furthermore, because the loading position corresponding to the animation is determined in the current video frame before the animation is loaded there, the animation is placed in the video more accurately. The loading position is continuously re-recognized as the user's head moves, so the animation changes correspondingly as the person moves. In addition, the expression is recognized and the corresponding animation is loaded automatically, which simplifies the operation steps compared with manual loading and improves the user experience.
BRIEF DESCRIPTION OF THE DRAWINGS

To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the drawings show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.

FIG. 1 is a flowchart of a method for loading emoticon effects in instant video according to an embodiment of the present invention;

FIG. 2 is a flowchart of a method for loading emoticon effects in instant video according to an embodiment of the present invention;

FIG. 3 is a schematic diagram of changes in an instant video interface according to an embodiment of the present invention;

FIG. 4 is a schematic diagram of changes in an instant video interface according to an embodiment of the present invention;

FIG. 5 is a flowchart of a method for loading emoticon effects in instant video according to an embodiment of the present invention;

FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;

FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;

FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;

FIG. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
DETAILED DESCRIPTION

To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

An embodiment of the present invention provides a method for loading emoticon effect animations in instant video. The method is applied in an interactive system that includes at least two electronic devices capable of instant video communication with each other through a running program. The electronic device may be a smartphone, a tablet computer, or another electronic device; the embodiments of the present invention do not limit the specific device. Each electronic device includes at least a video input module, which may include a camera, and a video display module, which may include a display screen.

The at least two electronic devices may interact by instant video directly, connecting via Bluetooth, WiFi, or another wireless connection, or through a connecting device such as a router; they may also interact by instant video through a server, which may be an application server.

In addition, the method provided by the embodiments of the present invention may also be applied in an interactive system that includes only an electronic device and a user, where the electronic device includes at least a video input module (which may include a camera) and a video display module (which may include a display screen) and can at least run an instant video program. The embodiments of the present invention may cover other application scenarios as well and do not limit the specific scenario. Notably, in the embodiments of the present invention, the emoticon effect animation is obtained by recognizing the person's expression in the instant video.
Embodiment 1

An embodiment of the present invention provides a method for loading emoticon effect animations in instant video. Referring to FIG. 1, the method flow includes:

101. Recognize the facial expression in an instant video frame and generate a recognition result.

Specifically, acquire the face-detail feature point parameters in the instant video frame, and acquire the facial expression in the current instant video frame according to those parameters.

102. Acquire, according to the recognition result, the emoticon effect animation to be loaded.

Specifically, acquire the emoticon effect animation corresponding to the facial expression.

103. Determine the loading position of the emoticon effect animation in the instant video frame.

Specifically, acquire the loading position according to the face-detail feature point parameters in the instant video frame.

104. Send the emoticon effect animation and the loading position to other electronic devices.

Optionally, the method further includes: loading the emoticon effect animation according to the loading position, and displaying the instant video frame after the animation is loaded.

Optionally, the method further includes: acquiring a cancellation instruction input by the user; deleting the effect animation indicated by the cancellation instruction; and sending the cancellation instruction to other electronic devices.

According to the method provided by this embodiment, a facial expression in an instant video frame is recognized to generate a recognition result, and the acquired emoticon effect animation is loaded at the loading position in the instant video according to that result. This realizes emoticon-effect loading in instant video, satisfies the user's need to interact by loading effect animations during a video call, enriches the forms of video interaction, and improves the user experience. Because the loading position corresponding to the animation is determined in the current video frame, the animation is placed in the video more accurately; the position is continuously re-recognized as the user's head moves, so the animation changes correspondingly with the person. In addition, recognizing the expression and loading the corresponding animation automatically simplifies the operation steps compared with manual loading.
Embodiment 2

An embodiment of the present invention provides a method for loading emoticon effect animations in instant video. Referring to FIG. 2, the method flow includes:

201. Acquire the face-detail feature point parameters in the instant video frame.

Specifically, because a facial expression is determined by the details of the face, the face-detail feature point parameters are used to describe the contours of those details. The face details include at least the eyes, mouth, eyebrows, and nose, and may include other details; the embodiments of the present invention do not limit the specific face details.

The face feature point parameters are determined from the coordinates of the face-detail feature points and the coordinates of the texture feature points corresponding to those feature points.

In addition, the face-detail feature parameters may also include the scale and direction of the vector indicated by the feature point on the face, among other things; the embodiments of the present invention do not limit the specific parameters.

Optionally, a texture feature point is acquired near each feature point. The texture feature point uniquely identifies the feature point and does not change with lighting, viewing angle, and the like.

The embodiments of the present invention determine the face-detail feature points through texture features. Because a texture feature point describes the region in which a feature point lies, it can uniquely identify that feature point, so the face-detail feature parameters describing the expression are determined from both the feature points and the texture feature points. This guarantees that the feature points in the instant video coincide with the actual feature points, preserves the recognition quality of image details, and improves the reliability of the acquired face-detail feature point parameters.

Notably, the feature points and texture feature points may be extracted from the face through a preset extraction model or extraction algorithm, or in other ways; the embodiments of the present invention do not limit the specific model, algorithm, or extraction method.
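The text leaves the texture descriptor open, so as one illustrative (hypothetical) choice of lighting-robust signature, a local binary pattern over a small patch around each feature point depends only on the ordering of neighbouring pixel intensities, not their absolute values — a common way to make a descriptor insensitive to monotonic lighting changes:

```python
def lbp_code(patch):
    """8-neighbour local binary pattern of a 3x3 grayscale patch.

    The code compares each neighbour against the centre pixel and packs
    the comparison bits into one byte; uniform brightness shifts leave
    the code unchanged, which is the property a texture feature point
    needs in order to identify a landmark under varying lighting.
    """
    center = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for i, value in enumerate(neighbours):
        if value >= center:
            code |= 1 << i
    return code
```

Two patches that differ only by a constant brightness offset produce the same code, so the descriptor can tag a feature point consistently across frames.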
202. Acquire the facial expression in the current instant video frame according to the face-detail feature point parameters.

Specifically, obtain the feature vector corresponding to the face-detail feature point parameters and, from that vector, obtain the facial expression in the instant video frame. The process may be:

a. from at least one face-detail feature point parameter, obtain at least one feature point coordinate and at least one texture feature point coordinate describing that parameter;

b. from the at least one feature point coordinate and at least one texture feature point coordinate, obtain the current pose matrix corresponding to the face's feature points and texture feature points in the instant video frame;

c. rotate the current pose matrix to the standard pose matrix, obtaining the feature point coordinates and texture feature point coordinates under the standard pose matrix;

d. from the coordinates under the standard pose matrix, generate the feature point vector corresponding to the face-detail feature point parameters;

e. input the feature point vector into a preset algorithm and obtain the facial expression given by the computation result.

The facial expression given by the computation result is the generated recognition result.
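Steps a–e above can be sketched as follows. This is a simplified model under stated assumptions: the pose normalization is reduced to a 2-D roll-only rotation (the actual pose matrix is not specified in the text), and the "preset algorithm" of step e is left to the caller:

```python
import math

def normalize_landmarks(points, roll_deg):
    """Rotate landmark coordinates by -roll so the face is upright —
    a stand-in for mapping the current pose matrix onto the standard
    pose matrix (steps b and c)."""
    theta = math.radians(-roll_deg)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return [(x * cos_t - y * sin_t, x * sin_t + y * cos_t)
            for x, y in points]

def to_feature_vector(points):
    """Flatten the normalized (x, y) pairs into the feature-point
    vector that is fed to the expression classifier (steps d and e)."""
    return [coord for point in points for coord in point]
```

Because every frame is normalized to the same standard pose before the vector is built, vectors from different frames (or different head tilts) become directly comparable.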
The facial expression in the current instant video frame may also be obtained in other ways; the embodiments of the present invention do not limit the specific way.

It should be noted that steps 201 and 202 implement the process of recognizing the facial expression in the instant video frame and generating the recognition result. Besides the process described above, this may be implemented in other ways, which the embodiments of the present invention do not limit.
By acquiring the at least one feature point and at least one texture feature point under the standard pose matrix, the embodiments of the present invention remove the influence of illumination, viewing angle, and other external factors on the face in the instant video, making the acquired feature points and texture feature points more comparable and making it more accurate to obtain the emoticon effect animation from the recognized expression in instant video.

203. Acquire, according to the facial expression, the emoticon effect animation corresponding to it.

Specifically, acquire the pre-stored emoticon effect animation corresponding to the facial expression.

For example, if the expression is a smile, acquire the pre-stored animation corresponding to "smile"; if the expression is a loud laugh, acquire the pre-stored animation corresponding to "loud laugh"; animations corresponding to other facial expressions are possible as well and are not enumerated here one by one.

It should be noted that step 203 is the process of acquiring the animation to be loaded according to the recognition result; this may also be implemented in other ways, which the embodiments of the present invention do not limit.

Optionally, the emoticon effect animation may also be obtained from the similarity between the facial expression and pre-stored emoticon effect animations.

Specifically, if the similarity between the facial expression and a pre-stored emoticon effect animation is greater than or equal to a preset threshold, that pre-stored animation is determined to correspond to the facial expression; if it is below the threshold, it is determined that no pre-stored animation corresponds to the expression, and the process ends.
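A minimal sketch of the threshold test described above, using cosine similarity between expression feature vectors as the similarity measure — an assumption, since the text does not specify the measure; the function names and the 0.8 default threshold are likewise illustrative:

```python
import math

def match_expression(expr_vec, stored, threshold=0.8):
    """Return the key of the pre-stored expression whose vector is most
    similar to expr_vec, or None when no similarity reaches the preset
    threshold (in which case loading fails and a prompt is shown)."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    best_name, best_sim = None, 0.0
    for name, vec in stored.items():
        sim = cosine(expr_vec, vec)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim >= threshold else None
```

Returning `None` rather than forcing a match is what lets the method degrade gracefully when an expression has no close pre-stored animation.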
Determining the corresponding animation through this similarity improves the efficiency of acquiring emoticon effect animations and avoids the failure case in which no animation exactly matches a given expression, improving the user experience.

Optionally, prompt information may also be shown to the user, for example text indicating that loading the emoticon effect animation failed; other display methods are possible, and the embodiments of the present invention do not limit the specific method.
204. Determine the loading position of the emoticon effect animation in the instant video frame.

Specifically, because the face-detail feature point parameters are determined from the feature point coordinates and the corresponding texture feature point coordinates, the coordinates of the face-detail feature points in the instant video frame are determined through those parameters, and the loading position of the animation in the frame is obtained from those coordinates.

The face-detail feature point coordinates are obtained by the same process as described in step 201, which is not repeated here.

Because the face-detail feature point parameters are determined from the feature point and texture feature point coordinates, the coordinates are determined at the same time as the parameters, so the loading position can be derived from them directly without a second determination step. This makes acquiring the loading position more accurate and concise, improves the user experience, and reduces the operation steps.
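One simple way to realize this — assuming the loading position is taken as the centroid of the relevant detail feature points (e.g. the mouth region for a laugh effect), which the text does not mandate — is:

```python
def loading_position(detail_points):
    """Anchor the effect at the centroid of the facial-detail feature
    points already computed in step 201; no extra detection pass is
    needed, matching the text's point that the coordinates are reused."""
    xs = [p[0] for p in detail_points]
    ys = [p[1] for p in detail_points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

Recomputing this centroid per frame is also what makes the overlay track the face as the head moves.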
205. Send the emoticon effect animation and the loading position to other electronic devices.

Specifically, send the animation and loading position to the other electronic devices interacting with this device by instant video. They may be carried in a pass-through message between this device and the other devices; the pass-through message realizes the sending process, and the animation and loading-position data may be the feature parameters of the animation and loading position, or instructions for the animation and for the loading-position feature parameters.

Optionally, the animation's loading instruction and the loading position may be sent to the other devices interacting with this device by instant video, or forwarded to them through a server.

The animation's loading instruction corresponds one-to-one with the animation.

Compared with sending the emoticon effect animation itself, sending the animation instruction and the loading-position parameter instruction occupies less memory and transmits faster, so sending the loading instruction and position to the other electronic devices or the server improves the synchronization and efficiency of effect loading and improves the user experience.
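The pass-through message carrying only the animation's loading instruction (here an assumed numeric effect ID) and the position parameters might look like this; the JSON layout is purely illustrative and not specified by the source:

```python
import json

def build_effect_message(effect_id, position):
    """Serialize only the effect's loading instruction (its ID) and the
    loading position — a few bytes — instead of the animation data,
    which both peers are assumed to hold or fetch locally."""
    return json.dumps({"effect_id": effect_id,
                       "x": position[0], "y": position[1]})
```

The receiving device resolves `effect_id` against its own (or the server's) effect store, which is why this message stays small regardless of the animation's size.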
Optionally, the method further includes:

206. Load the emoticon effect animation according to the loading position, and display the instant video frame after the animation is loaded.

Specifically, after the user triggers an emoticon-effect loading instruction, the electronic device loads the recognized and acquired animation at the loading position.

According to the user-triggered loading instruction, the device performs the loading in its own loading module; at the same time, it may send the loaded animation and the loading position's feature parameters to the other devices interacting with it by instant video through a pass-through message.

Optionally, the server loads the animation at the loading position according to the loading instruction and the position's feature parameters, then sends the video with the loaded animation to the electronic devices participating in the video interaction, causing the animation to be displayed on their screens.

Because the effect-loading step can be executed by the server, this saves the electronic device's system resources and processing resources compared with loading on the device. Also, since the server can store all effect data, server-side loading — compared with loading on an electronic device that may not store some of the effect data — saves the device's storage resources and network resources.
Optionally, the method further includes:

207. Acquire a cancellation instruction input by the user.

Specifically, the electronic device acquires the user's cancellation instruction through its own receiving module, or in other ways; the embodiments of the present invention do not limit the specific way.

The cancellation instruction indicates the emoticon effect animation the user wants to remove.

The user may tap a cancel icon on the video interface and then tap the effect animation to be removed, or may trigger the cancellation instruction through a function key with a return function; the embodiments of the present invention do not limit the specific icon or its position.

208. Delete the effect animation indicated by the cancellation instruction.

Specifically, delete the effect data corresponding to the animation the instruction indicates.

Removing effect animations according to the cancellation instruction lets the user remove or undo already-loaded animations, further satisfying personalized needs and improving the interactive experience.

209. Send the cancellation instruction to other electronic devices.

Specifically, the device sends the cancellation instruction to the other devices through its own sending module, or in other ways; the embodiments of the present invention do not limit the specific way.
To help those skilled in the art further understand the method provided by the present invention, the method is further described with reference to the drawings. Referring to FIG. 3, suppose the current user's expression is anger. After the expression is recognized as anger, the animation corresponding to anger is loaded in the instant video interface shown in frame 1 of FIG. 3. Taking an animation lasting five frames as an example: after the expression is recognized in frame 1 and the frames before it, the animation is loaded onto the five frames after frame 1, and after frame 6 the interface returns to the state shown in frame 1, without the animation.

Optionally, referring to FIG. 4, suppose the expression of the user interacting with the current user is shyness. After that expression is recognized, the animation corresponding to shyness is loaded in the interface shown in frame 1 of FIG. 4. Taking an animation lasting four frames as an example: after the expression is recognized in frame 1 and the frames before it, the animation is loaded onto the frames after frame 1, and after frame 5 the interface returns to the state shown in frame 1, without the animation.

Notably, during instant video interaction, suppose the interacting users are a first user and a second user. If the first user's expression is recognized as anger, the interface displayed on the second user's device may be as shown in FIG. 3; after the first user switches the displayed user by tapping the small video window in the upper-right corner of frame 1 in FIG. 3, the interface on the first user's own device may also be as shown in FIG. 3. In some scenarios, if the first user's expression is recognized as anger and the second user's as shyness, the second user's device may display the interface of FIG. 3 while the first user's device displays that of FIG. 4.
The present invention provides a method for loading emoticon effects in instant video: a facial expression in an instant video frame is recognized to generate a recognition result, and the acquired emoticon effect animation is loaded at the loading position in the instant video according to that result. This realizes emoticon-effect loading in instant video, satisfies the user's need to interact by loading effect animations during a video call, enriches the forms of video interaction, and improves the user experience. Because the loading position corresponding to the animation is determined in the current video frame, the animation is placed in the video more accurately; the position is continuously re-recognized as the user's head moves, so the animation changes correspondingly with the person. Recognizing the expression and loading the corresponding animation automatically simplifies the operation steps compared with manual loading. In addition: determining the face-detail feature points through texture features lets each feature point be uniquely identified by the region it lies in, guaranteeing that feature points in the instant video coincide with the actual feature points and improving the reliability of the face-detail feature parameters; acquiring the feature points and texture feature points under the standard pose matrix removes the influence of illumination, viewing angle, and other external factors, making the points more comparable and the animation obtained from the recognized expression more accurate; determining the animation by its similarity to pre-stored animations improves retrieval efficiency and avoids failures when no animation matches an expression exactly; because the feature point coordinates are determined together with the feature point parameters, the loading position is obtained accurately and concisely without a second determination; sending the animation's loading instruction and position parameters rather than the animation itself occupies less memory and transmits faster, improving the synchronization and efficiency of effect loading; loading the animation on the server saves the device's storage and network resources when the device does not store some effect data; and deleting effects according to a cancellation instruction lets the user remove or undo loaded animations, further satisfying personalized needs and improving the interactive experience.
Embodiment 3

An embodiment of the present invention provides a method for displaying emoticon effects in instant video. Referring to FIG. 5, the method flow includes:

501. Receive the emoticon effect animation and loading position sent by another electronic device.

Specifically, the electronic device may receive the animation and loading position sent by another device, or may first obtain them on this device from the facial expression in the instant video frame.

502. Load the animation at the loading position, and display the instant video after the animation is loaded.

The loading position is determined by the other electronic device from the current video frame and then sent to this device; the animation is obtained by the other device by recognizing the facial expression in the instant video frame and then sent to this device.

Optionally, if the electronic device sends an emoticon effect animation loading instruction and a loading position, the method may further include:

601. Receive the emoticon effect animation instruction and loading position sent by the electronic device.

Specifically, this step is the same as step 501 and is not described again here.
602. According to the emoticon effect animation instruction, determine whether this electronic device stores the effect animation the instruction indicates; if it is stored, execute step 603; if not, execute step 604.

Specifically, the instruction may be compared against the multiple effect animation instructions pre-stored on this device to determine whether the indicated effect is stored locally; the embodiments of the present invention do not limit the specific determination method.

603. Load the animation indicated by the instruction at the determined loading position.

Specifically, this step is the same as step 502 and is not described again here.

604. Download the animation indicated by the instruction from the server, and after step 604, execute step 603.

Specifically, the embodiments of the present invention do not limit the specific download process.

Because effect animations can be downloaded from the server, the electronic device and the server need not consume large amounts of storage space holding many animations, which improves the user experience and speeds up transmission.
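Steps 602–604 amount to a cache-or-download lookup. This sketch stubs the server download as a caller-supplied function, since the embodiment deliberately does not limit the download process:

```python
def get_effect(effect_id, local_cache, download_from_server):
    """Step 602: check the local store for the indicated effect.
    Step 604: on a miss, fall back to the server download, keeping the
    result locally so repeated use does not re-download.
    Either way, the returned data then proceeds to loading (step 603)."""
    if effect_id in local_cache:
        return local_cache[effect_id]
    data = download_from_server(effect_id)
    local_cache[effect_id] = data
    return data
```

Caching the downloaded effect is a design choice consistent with the text's goal of limiting storage and network use: each effect crosses the network at most once per device.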
The embodiment of the present invention provides a method for displaying emoticon effects in instant video: a facial expression in an instant video frame is recognized to generate a recognition result, and the acquired emoticon effect animation is loaded at the loading position in the instant video according to that result. This realizes emoticon-effect loading in instant video, satisfies the user's need to interact by loading effect animations during a video call, enriches the forms of video interaction, and improves the user experience. Because the loading position corresponding to the animation is determined in the current video frame, the animation is placed in the video more accurately; the position is continuously re-recognized as the user's head moves, so the animation changes correspondingly with the person. In addition, recognizing the expression and loading the corresponding animation automatically simplifies the operation steps compared with manual loading, improving the user experience.
Embodiment 4

An embodiment of the present invention provides an electronic device 6. Referring to FIG. 6, the electronic device 6 includes:

a recognition module 61, configured to recognize a facial expression in an instant video frame and generate a recognition result;

an obtaining module 62, configured to acquire, according to the recognition result, the emoticon effect animation to be loaded;

a determining module 63, configured to determine the loading position of the emoticon effect animation in the instant video frame;

a sending module 64, configured to send the emoticon effect animation and the loading position to other electronic devices;

a loading module 65, configured to load the emoticon effect animation according to the loading position;

a display module 66, configured to display the instant video after the animation is loaded.

Optionally, the recognition module 61 is specifically configured to: acquire face-detail feature point parameters in the instant video frame, and acquire the facial expression in the current frame according to them.

Optionally, the obtaining module 62 is specifically configured to: acquire, according to the facial expression, the emoticon effect animation corresponding to it.

Optionally, the determining module 63 is further specifically configured to: acquire the loading position of the animation in the instant video frame according to the face-detail feature point parameters in the frame.

Optionally, the device further includes a receiving module, configured to acquire a cancellation instruction input by the user, and a deleting module, configured to delete the effect animation the instruction indicates; the sending module 64 is further configured to send the cancellation instruction to other electronic devices.

The embodiment of the present invention provides an electronic device that generates a recognition result by recognizing the facial expression in an instant video frame and, according to that result, loads the acquired emoticon effect animation at the loading position in the instant video. This realizes emoticon-effect loading in instant video, satisfies the user's need to interact by loading effect animations during a video call, enriches the forms of video interaction, and improves the user experience. Because the loading position corresponding to the animation is determined in the current video frame, the animation is placed in the video more accurately; the position is continuously re-recognized as the user's head moves, so the animation changes correspondingly with the person. In addition, recognizing the expression and loading the corresponding animation automatically simplifies the operation steps compared with manual loading.
Embodiment 5

An embodiment of the present invention provides an electronic device 7. Referring to FIG. 7, it includes a video input module 71, a video output module 72, a sending module 73, a receiving module 74, a memory 75, and a processor 76 connected to the video input module 71, the video output module 72, the sending module 73, the receiving module 74, and the memory 75, where the memory 75 stores a set of program code and the processor 76 is configured to call the program code stored in the memory 75 to perform the following operations:

recognize a facial expression in an instant video frame, and generate a recognition result;

acquire, according to the recognition result, the emoticon effect animation to be loaded;

determine the loading position of the emoticon effect animation in the instant video frame;

control the sending module 73 to send the animation and the loading position to other electronic devices;

where the processor 76 also calls the program code stored in the memory 75 to control the video input module 71 to receive instant video frames.

Optionally, the processor 76 is configured to call the program code stored in the memory 75 to perform the following operations: acquire face-detail feature point parameters in the instant video frame, and acquire the facial expression in the current frame according to them.

Optionally, the processor 76 is configured to call the program code stored in the memory 75 to perform the following operation: acquire, according to the facial expression, the emoticon effect animation corresponding to it.

Optionally, the processor 76 is configured to call the program code stored in the memory 75 to perform the following operation: acquire the loading position of the animation in the instant video frame according to the face-detail feature point parameters in the frame.

Optionally, the processor 76 is configured to call the program code stored in the memory 75 to perform the following operations: load the animation according to the loading position, and control the video output module 72 to display the instant video frame after the animation is loaded.

Optionally, the processor 76 is configured to call the program code stored in the memory 75 to perform the following operations: control the receiving module 74 to acquire a cancellation instruction input by the user; delete the effect animation the instruction indicates; and control the sending module 73 to send the cancellation instruction to other electronic devices.

The embodiment of the present invention provides an electronic device that generates a recognition result by recognizing the facial expression in an instant video frame and, according to that result, loads the acquired emoticon effect animation at the loading position in the instant video. This realizes emoticon-effect loading in instant video, satisfies the user's need to interact by loading effect animations during a video call, enriches the forms of video interaction, and improves the user experience. Because the loading position corresponding to the animation is determined in the current video frame, the animation is placed in the video more accurately; the position is continuously re-recognized as the user's head moves, so the animation changes correspondingly with the person. In addition, recognizing the expression and loading the corresponding animation automatically simplifies the operation steps compared with manual loading.
Embodiment 6

An embodiment of the present invention provides an electronic device 8. Referring to FIG. 8, the electronic device 8 includes:

a receiving module 81, configured to receive an emoticon effect animation and a loading position sent by other electronic devices;

a loading module 82, configured to load the animation at the loading position;

a display module 83, configured to display the instant video frame after the animation is loaded;

where the loading position is determined by the other electronic device from the current video frame and then sent to this device, and the animation is obtained by the other device by recognizing the facial expression in the instant video frame and then sent to this device.

Optionally, the receiving module is further configured to receive a cancellation instruction sent by other electronic devices, and the loading module is further configured to delete the effect animation the instruction indicates.

The embodiment of the present invention provides an electronic device that generates a recognition result by recognizing the facial expression in an instant video frame and, according to that result, loads the acquired emoticon effect animation at the loading position in the instant video. This realizes emoticon-effect loading in instant video, satisfies the user's need to interact by loading effect animations during a video call, enriches the forms of video interaction, and improves the user experience. Because the loading position corresponding to the animation is determined in the current video frame, the animation is placed in the video more accurately; the position is continuously re-recognized as the user's head moves, so the animation changes correspondingly with the person. In addition, recognizing the expression and loading the corresponding animation automatically simplifies the operation steps compared with manual loading.
Embodiment 7

An embodiment of the present invention provides an electronic device. Referring to FIG. 9, it includes a video output module 91, a sending/receiving module 92, a memory 93, and a processor 94 connected to the video output module 91, the sending/receiving module 92, and the memory 93, where the memory 93 stores a set of program code and the processor 94 is configured to call the program code stored in the memory 93 to perform the following operations:

receive an emoticon effect animation and a loading position sent by other electronic devices;

load the animation at the loading position, and control the video output module 91 to display the instant video frame after the animation is loaded;

where the loading position is determined by the other electronic device from the current video frame and then sent to this device, and the animation is obtained by the other device by recognizing the facial expression in the instant video frame and then sent to this device.

Optionally, the processor 94 is configured to call the program code stored in the memory 93 to perform the following operations: receive a cancellation instruction sent by other electronic devices; delete the effect animation the instruction indicates.

The embodiment of the present invention provides an effect loading method in instant video: a facial expression in an instant video frame is recognized to generate a recognition result, and the acquired emoticon effect animation is loaded at the loading position in the instant video according to that result. This realizes emoticon-effect loading in instant video, satisfies the user's need to interact by loading effect animations during a video call, enriches the forms of video interaction, and improves the user experience. Because the loading position corresponding to the animation is determined in the current video frame, the animation is placed in the video more accurately; the position is continuously re-recognized as the user's head moves, so the animation changes correspondingly with the person. In addition, recognizing the expression and loading the corresponding animation automatically simplifies the operation steps compared with manual loading.
It should be noted that when the electronic device provided by the above embodiments triggers the method for loading emoticon effects in instant video, the division into the above functional modules is given merely by way of example. In practical applications, the above functions may be assigned to different functional modules as required; that is, the internal structure of the electronic device may be divided into different functional modules to perform all or part of the functions described above. In addition, the electronic device provided by the above embodiments and the embodiments of the method for loading an emoticon effect animation in instant video belong to the same concept; for the specific implementation process, refer to the method embodiments, which are not repeated here.
A person of ordinary skill in the art will understand that all or part of the steps of the above embodiments may be implemented in hardware, or by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above are merely preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

  1. A method for loading an emoticon effect animation in instant video, wherein the method comprises:
    recognizing a facial expression in an instant video frame to generate a recognition result;
    acquiring, according to the recognition result, the emoticon effect animation to be loaded;
    determining a loading position of the emoticon effect animation in the instant video frame;
    sending the emoticon effect animation and the loading position to another electronic device.
  2. The method according to claim 1, wherein recognizing the facial expression in the instant video frame comprises:
    acquiring facial detail feature point parameters in the instant video frame;
    acquiring the facial expression in the current instant video frame according to the facial detail feature point parameters.
  3. The method according to claim 2, wherein acquiring, according to the recognition result, the emoticon effect animation to be loaded comprises:
    acquiring, according to the facial expression, the emoticon effect animation corresponding to the facial expression.
  4. The method according to claim 2 or 3, wherein determining the loading position of the emoticon effect animation in the instant video frame comprises:
    acquiring the loading position of the emoticon effect animation in the instant video frame according to the facial detail feature point parameters in the instant video frame.
  5. The method according to claim 4, wherein the method further comprises:
    loading the emoticon effect animation according to the loading position, and displaying the instant video frame after the emoticon effect animation is loaded.
  6. The method according to claim 5, wherein the method further comprises:
    acquiring an elimination instruction input by a user;
    deleting the effect animation indicated by the elimination instruction;
    sending the elimination instruction to the other electronic device.
  7. An electronic device, wherein the electronic device comprises:
    a recognition module, configured to recognize a facial expression in an instant video frame to generate a recognition result;
    an acquisition module, configured to acquire, according to the recognition result, the emoticon effect animation to be loaded;
    a determination module, configured to determine a loading position of the emoticon effect animation in the instant video frame;
    a sending module, configured to send the emoticon effect animation and the loading position to another electronic device;
    a loading module, configured to load the emoticon effect animation according to the loading position;
    a display module, configured to display the instant video frame after the emoticon effect animation is loaded.
  8. A method for displaying emoticon effects in instant video, wherein the method comprises:
    receiving an emoticon effect animation and a loading position sent by another electronic device;
    loading the emoticon effect animation at the loading position, and displaying the instant video frame after the emoticon effect animation is loaded;
    wherein the loading position is determined in the current video frame, and the emoticon effect animation is acquired by recognizing a facial expression in the instant video frame.
  9. The method according to claim 8, wherein the method further comprises:
    receiving an elimination instruction sent by the other electronic device;
    deleting the effect animation indicated by the elimination instruction.
  10. An electronic device, wherein the electronic device comprises:
    a receiving module, configured to receive an emoticon effect animation and a loading position sent by another electronic device;
    a loading module, configured to load the emoticon effect animation at the loading position;
    a display module, configured to display the instant video frame after the emoticon effect animation is loaded;
    wherein the loading position is determined in the current video frame, and the emoticon effect animation is acquired by recognizing a facial expression in the instant video frame.
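Claims 2 and 4 tie both the recognized expression and the loading position to facial detail feature point parameters. One way that idea can be sketched is below; the landmark names, the y-axis convention (y grows downward, as in image coordinates), and the smile heuristic are all assumptions for illustration, since the claims do not specify a concrete feature-point scheme.

```python
# Sketch of claims 2-4: the same facial feature points drive both expression
# recognition and the choice of loading position. All landmark names and the
# smile heuristic below are hypothetical.

def recognize(landmarks):
    # Toy heuristic: mouth corners sitting above the mouth centre (smaller y)
    # suggest an upturned mouth, i.e. a smile.
    left, right = landmarks["mouth_left"], landmarks["mouth_right"]
    center = landmarks["mouth_center"]
    return "smile" if (left[1] + right[1]) / 2 < center[1] else "neutral"

def load_position(landmarks):
    # One possible anchoring choice: the midpoint between the eyes, so the
    # effect stays attached to the face as the feature points move per frame.
    le, re = landmarks["eye_left"], landmarks["eye_right"]
    return ((le[0] + re[0]) // 2, (le[1] + re[1]) // 2)

landmarks = {
    "eye_left": (100, 60), "eye_right": (140, 60),
    "mouth_left": (105, 100), "mouth_right": (135, 100),
    "mouth_center": (120, 108),
}
print(recognize(landmarks), load_position(landmarks))
# -> smile (120, 60)
```

Recomputing `load_position` from fresh landmarks on every frame is what lets the loaded animation track the face, matching the per-frame determination described in the embodiments.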
PCT/CN2016/079116 2015-04-16 2016-04-13 一种即时视频中的表情特效动画加载方法和电子设备 WO2016165615A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510181435.5 2015-04-16
CN201510181435.5A CN104780339A (zh) 2015-04-16 2015-04-16 一种即时视频中的表情特效动画加载方法和电子设备

Publications (1)

Publication Number Publication Date
WO2016165615A1 true WO2016165615A1 (zh) 2016-10-20

Family

ID=53621551

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/079116 WO2016165615A1 (zh) 2015-04-16 2016-04-13 一种即时视频中的表情特效动画加载方法和电子设备

Country Status (2)

Country Link
CN (1) CN104780339A (zh)
WO (1) WO2016165615A1 (zh)


Families Citing this family (55)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104780339A (zh) * 2015-04-16 2015-07-15 美国掌赢信息科技有限公司 一种即时视频中的表情特效动画加载方法和电子设备
CN105068748A (zh) * 2015-08-12 2015-11-18 上海影随网络科技有限公司 触屏智能设备的摄像头实时画面中用户界面交互方法
CN105407313A (zh) * 2015-10-28 2016-03-16 掌赢信息科技(上海)有限公司 一种视频通话方法、设备和系统
CN106713811B (zh) 2015-11-17 2019-08-13 腾讯科技(深圳)有限公司 视频通话方法和装置
CN105578110B (zh) * 2015-11-19 2019-03-19 掌赢信息科技(上海)有限公司 一种视频通话方法
CN105451029B (zh) * 2015-12-02 2019-04-02 广州华多网络科技有限公司 一种视频图像的处理方法及装置
CN105812699B (zh) * 2016-03-18 2019-06-25 联想(北京)有限公司 一种生成动态图片方法及电子设备
CN105898182A (zh) * 2016-03-30 2016-08-24 宁波三博电子科技有限公司 一种基于人脸识别的弹幕点歌方法及系统
CN105872442A (zh) * 2016-03-30 2016-08-17 宁波三博电子科技有限公司 一种基于人脸识别的即时弹幕礼物赠送方法及系统
CN105847735A (zh) * 2016-03-30 2016-08-10 宁波三博电子科技有限公司 一种基于人脸识别的即时弹幕视频通信方法及系统
CN105847734A (zh) * 2016-03-30 2016-08-10 宁波三博电子科技有限公司 一种基于人脸识别的视频通信方法及系统
CN107318054A (zh) * 2016-04-26 2017-11-03 富泰华工业(深圳)有限公司 影音自动处理系统及方法
CN106060572A (zh) * 2016-06-08 2016-10-26 乐视控股(北京)有限公司 视频播放方法及装置
CN106331526B (zh) * 2016-08-30 2019-11-15 北京奇艺世纪科技有限公司 一种拼接动画生成、播放方法及装置
CN106373170A (zh) * 2016-08-31 2017-02-01 北京云图微动科技有限公司 一种视频制作方法及装置
CN106331880B (zh) * 2016-09-09 2020-12-04 腾讯科技(深圳)有限公司 一种信息处理方法及系统
CN106778706A (zh) * 2017-02-08 2017-05-31 康梅 一种基于表情识别的实时假面视频展示方法
CN108076370B (zh) * 2017-02-13 2020-11-17 北京市商汤科技开发有限公司 信息传输方法、装置和电子设备
CN106803909A (zh) * 2017-02-21 2017-06-06 腾讯科技(深圳)有限公司 一种视频文件的生成方法及终端
CN107071330A (zh) * 2017-02-28 2017-08-18 维沃移动通信有限公司 一种视频通话互动的方法及移动终端
CN106804007A (zh) * 2017-03-20 2017-06-06 合网络技术(北京)有限公司 一种网络直播中自动匹配特效的方法、系统及设备
CN107071580A (zh) * 2017-03-20 2017-08-18 北京潘达互娱科技有限公司 数据处理方法及装置
CN107124658B (zh) * 2017-05-02 2019-10-11 北京小米移动软件有限公司 视频直播方法及装置
CN107657652A (zh) * 2017-09-11 2018-02-02 广东欧珀移动通信有限公司 图像处理方法和装置
CN107592474A (zh) * 2017-09-14 2018-01-16 光锐恒宇(北京)科技有限公司 一种图像处理方法和装置
CN109509140A (zh) * 2017-09-15 2019-03-22 阿里巴巴集团控股有限公司 显示方法及装置
CN107911643B (zh) * 2017-11-30 2020-10-27 维沃移动通信有限公司 一种视频通信中展现场景特效的方法和装置
CN107992824A (zh) * 2017-11-30 2018-05-04 努比亚技术有限公司 拍照处理方法、移动终端及计算机可读存储介质
CN107948667B (zh) * 2017-12-05 2020-06-30 广州酷狗计算机科技有限公司 在直播视频中添加显示特效的方法和装置
CN109903360A (zh) * 2017-12-08 2019-06-18 浙江舜宇智能光学技术有限公司 三维人脸动画控制系统及其控制方法
CN108200373B (zh) * 2017-12-29 2021-03-26 北京乐蜜科技有限责任公司 图像处理方法、装置、电子设备及介质
CN108307127A (zh) * 2018-01-12 2018-07-20 广州市百果园信息技术有限公司 视频处理方法及计算机存储介质、终端
CN108234825A (zh) * 2018-01-12 2018-06-29 广州市百果园信息技术有限公司 视频处理方法及计算机存储介质、终端
CN108711192A (zh) * 2018-04-10 2018-10-26 光锐恒宇(北京)科技有限公司 一种视频处理方法和装置
US10681310B2 (en) 2018-05-07 2020-06-09 Apple Inc. Modifying video streams with supplemental content for video conferencing
US11012389B2 (en) 2018-05-07 2021-05-18 Apple Inc. Modifying images with supplemental content for messaging
CN108600785B (zh) * 2018-05-10 2021-05-04 闪玩有限公司 视频串流中子程序的同步方法及计算机可读存储介质
CN108597001A (zh) * 2018-05-15 2018-09-28 Oppo广东移动通信有限公司 气氛数据处理方法、装置、存储介质及终端
CN108648251B (zh) * 2018-05-15 2022-05-24 奥比中光科技集团股份有限公司 3d表情制作方法及系统
CN108830917B (zh) * 2018-05-29 2023-04-18 努比亚技术有限公司 一种信息生成方法、终端及计算机可读存储介质
CN110769323B (zh) * 2018-07-27 2021-06-18 Tcl科技集团股份有限公司 一种视频通信方法、系统、装置和终端设备
CN111507142A (zh) * 2019-01-31 2020-08-07 北京字节跳动网络技术有限公司 人脸表情图像处理方法、装置和电子设备
CN111507143B (zh) * 2019-01-31 2023-06-02 北京字节跳动网络技术有限公司 表情图像效果生成方法、装置和电子设备
CN109903359B (zh) * 2019-03-15 2023-05-05 广州市百果园网络科技有限公司 一种粒子的显示方法、装置、移动终端和存储介质
CN109978996B (zh) * 2019-03-28 2021-06-11 北京达佳互联信息技术有限公司 生成表情三维模型的方法、装置、终端及存储介质
CN110475157A (zh) * 2019-07-19 2019-11-19 平安科技(深圳)有限公司 多媒体信息展示方法、装置、计算机设备及存储介质
CN110650306B (zh) * 2019-09-03 2022-04-15 平安科技(深圳)有限公司 视频聊天中添加表情的方法、装置、计算机设备及存储介质
CN110557649B (zh) * 2019-09-12 2021-12-28 广州方硅信息技术有限公司 直播交互方法、直播系统、电子设备及存储介质
CN112887631B (zh) * 2019-11-29 2022-08-12 北京字节跳动网络技术有限公司 在视频中显示对象的方法、装置、电子设备及计算机可读存储介质
CN111031334A (zh) * 2019-12-06 2020-04-17 广州华多网络科技有限公司 文字虚拟礼物内容的推荐方法、装置、设备及存储介质
CN111405307A (zh) * 2020-03-20 2020-07-10 广州华多网络科技有限公司 直播模板配置方法、装置及电子设备
CN111859025A (zh) * 2020-07-03 2020-10-30 广州华多网络科技有限公司 表情指令生成方法、装置、设备及存储介质
CN112422844A (zh) * 2020-09-23 2021-02-26 上海哔哩哔哩科技有限公司 在视频中添加特效的方法、装置、设备及可读存储介质
CN112270733A (zh) * 2020-09-29 2021-01-26 北京五八信息技术有限公司 Ar表情包的生成方法、装置、电子设备及存储介质
CN113163135B (zh) * 2021-04-25 2022-12-16 北京字跳网络技术有限公司 视频的动画添加方法、装置、设备及介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7379071B2 (en) * 2003-10-14 2008-05-27 Microsoft Corporation Geometry-driven feature point-based image synthesis
CN101247482A (zh) * 2007-05-16 2008-08-20 北京思比科微电子技术有限公司 一种实现动态图像处理的方法和装置
CN101287093A (zh) * 2008-05-30 2008-10-15 北京中星微电子有限公司 在视频通信中添加特效的方法及视频客户端
US20130235045A1 (en) * 2012-03-06 2013-09-12 Mixamo, Inc. Systems and methods for creating and distributing modifiable animated video messages
CN104780339A (zh) * 2015-04-16 2015-07-15 美国掌赢信息科技有限公司 一种即时视频中的表情特效动画加载方法和电子设备

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003037826A (ja) * 2001-07-23 2003-02-07 Alpine Electronics Inc 代理画像表示装置およびテレビ電話装置
KR101326651B1 (ko) * 2006-12-19 2013-11-08 엘지전자 주식회사 이모티콘을 이용한 화상통화장치 및 방법
CN201066884Y (zh) * 2007-05-16 2008-05-28 北京思比科微电子技术有限公司 一种实现动态图像处理的装置
KR101533065B1 (ko) * 2008-12-01 2015-07-01 삼성전자주식회사 화상통화 중 애니메이션 효과 제공 방법 및 장치
KR101189053B1 (ko) * 2009-09-05 2012-10-10 에스케이플래닛 주식회사 아바타 기반 화상 통화 방법 및 시스템, 이를 지원하는 단말기
KR20110030223A (ko) * 2009-09-17 2011-03-23 엘지전자 주식회사 이동 단말기 및 그 제어방법
CN102055912B (zh) * 2009-10-29 2014-10-29 北京中星微电子有限公司 一种视频应用系统、视频特效处理系统和方法
CN101877056A (zh) * 2009-12-21 2010-11-03 北京中星微电子有限公司 人脸表情识别方法及系统、表情分类器的训练方法及系统
US20120069028A1 (en) * 2010-09-20 2012-03-22 Yahoo! Inc. Real-time animations of emoticons using facial recognition during a video chat
CN102455898A (zh) * 2010-10-29 2012-05-16 张明 视频聊天卡通表情辅助娱乐系统
KR20120120858A (ko) * 2011-04-25 2012-11-02 강준규 영상통화 서비스 및 그 제공방법, 이를 위한 영상통화서비스 제공서버 및 제공단말기
CN102271241A (zh) * 2011-09-02 2011-12-07 北京邮电大学 一种基于面部表情/动作识别的图像通信方法及系统
KR101862128B1 (ko) * 2012-02-23 2018-05-29 삼성전자 주식회사 얼굴을 포함하는 영상 처리 방법 및 장치
CN103297742A (zh) * 2012-02-27 2013-09-11 联想(北京)有限公司 数据处理方法、微处理器、通信终端及服务器
CN102638658A (zh) * 2012-03-01 2012-08-15 盛乐信息技术(上海)有限公司 音视频编辑方法及系统
CN103369288B (zh) * 2012-03-29 2015-12-16 深圳市腾讯计算机系统有限公司 基于网络视频的即时通讯方法及系统


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111656318A (zh) * 2017-11-09 2020-09-11 深圳传音通讯有限公司 一种基于拍照功能的表情的添加方法及添加装置
CN111753784A (zh) * 2020-06-30 2020-10-09 广州酷狗计算机科技有限公司 视频的特效处理方法、装置、终端及存储介质
CN112788275A (zh) * 2020-12-31 2021-05-11 北京字跳网络技术有限公司 视频通话方法、装置、电子设备和存储介质
CN112788275B (zh) * 2020-12-31 2023-02-24 北京字跳网络技术有限公司 视频通话方法、装置、电子设备和存储介质
CN115250340A (zh) * 2021-04-26 2022-10-28 海信集团控股股份有限公司 一种mv录制方法和显示设备
CN114092608A (zh) * 2021-11-17 2022-02-25 广州博冠信息科技有限公司 表情的处理方法及装置、计算机可读存储介质、电子设备
CN114760492A (zh) * 2022-04-22 2022-07-15 咪咕视讯科技有限公司 直播特效生成方法、装置、系统与计算机可读存储介质
CN114760492B (zh) * 2022-04-22 2023-10-20 咪咕视讯科技有限公司 直播特效生成方法、装置、系统与计算机可读存储介质

Also Published As

Publication number Publication date
CN104780339A (zh) 2015-07-15

Similar Documents

Publication Publication Date Title
WO2016165615A1 (zh) 一种即时视频中的表情特效动画加载方法和电子设备
US11557075B2 (en) Body pose estimation
US10097492B2 (en) Storage medium, communication terminal, and display method for enabling users to exchange messages
CN107247548B (zh) 图像显示方法、图像处理方法及装置
US11036989B1 (en) Skeletal tracking using previous frames
KR102506738B1 (ko) 눈 텍스처 인페인팅
US11508087B2 (en) Texture-based pose validation
KR101768532B1 (ko) 증강 현실을 이용한 화상 통화 시스템 및 방법
CN113420719A (zh) 生成动作捕捉数据的方法、装置、电子设备以及存储介质
KR20230113370A (ko) 얼굴 애니메이션 합성
TW202009682A (zh) 基於擴增實境的互動方法及裝置
KR20240066263A (ko) 얼굴 표정들에 기초하여 대화형 패션을 제어함
KR20230044213A (ko) 관절형 애니메이션을 위한 모션 표현들
CN114187392A (zh) 虚拟偶像的生成方法、装置和电子设备
US11973730B2 (en) External messaging function for an interaction system
WO2023220163A1 (en) Multi-modal human interaction controlled augmented reality
US20230260127A1 (en) Interactively defining an object segmentation
US20230199147A1 (en) Avatar call platform
CN113327311B (zh) 基于虚拟角色的显示方法、装置、设备、存储介质
KR20230157494A (ko) 실시간에서의 실제 크기 안경류
KR20230124689A (ko) 메시징 시스템 내에서의 비디오 트리밍
US11894989B2 (en) Augmented reality experience event metrics system
US20230343037A1 (en) Persisting augmented reality experiences
US20230386144A1 (en) Automated augmented reality experience creation system
CN114968523A (zh) 不同场景间的人物传送方法、装置、电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16779590

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16779590

Country of ref document: EP

Kind code of ref document: A1