WO2019114528A1 - Animation implementation method, terminal and storage medium - Google Patents

Animation implementation method, terminal and storage medium

Info

Publication number
WO2019114528A1
Authority
WO
WIPO (PCT)
Prior art keywords: animation, video window, motion, information, image
Application number: PCT/CN2018/117278
Other languages: English (en), French (fr)
Inventors: 潘文婷, 宁彬泉, 成平, 曹超利, 秦小龙, 余帆
Original Assignee: 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Application filed by 腾讯科技(深圳)有限公司
Priority to EP18888608.9A (EP3726843B1)
Publication of WO2019114528A1


Classifications

    • H04L51/10: Multimedia information (user-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, characterised by the inclusion of specific contents)
    • H04L51/04: Real-time or near real-time messaging, e.g. instant messaging [IM]
    • H04N21/4312: Generation of visual interfaces for content selection or interaction, involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/44008: Processing of video elementary streams, involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/44218: Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • H04N21/4788: Supplemental services, communicating with other users, e.g. chatting
    • H04N21/8146: Monomedia components involving graphical data, e.g. 3D object, 2D graphics

Definitions

  • the present application relates to the field of video technologies, and in particular, to an animation implementation method, a terminal, and a storage medium.
  • Current live video products support a multi-person live video mode.
  • The terminal can display the video images of all chat users in the live video interface, so that a user can see his or her own video image and the video images of the other live broadcast users.
  • Some live video products have also added video interaction functions, for example adding expressions and other animation effects, so that the other users can see the user's own video image with the added expression.
  • Specifically, the terminal may add a corresponding expression to the user's own video picture, and then send the video image carrying the added expression to the video receiver terminal through the network for display, thereby synchronizing the animation effect.
  • However, this video interaction scheme needs to send the video image carrying the expression to the terminals of the other video chat users through the network, which wastes network resources.
  • An animation implementation method, terminal, and storage medium are provided in accordance with various embodiments of the present application.
  • An animation implementation method is performed by a terminal, where the terminal includes a memory and a processor, and the method includes: performing motion detection on an object in each video window on a video interaction interface; when motion information of the object in a target video window is detected, determining animation information and an animation motion direction according to the motion information; determining an end point video window from the video windows on the video interaction interface according to the animation motion direction; and animating the end point video window according to the animation information.
  • A terminal includes a memory and a processor, the memory storing computer readable instructions which, when executed by the processor, cause the processor to perform the following steps: performing motion detection on an object in each video window on a video interaction interface; when motion information of the object in a target video window is detected, determining animation information and an animation motion direction according to the motion information; determining an end point video window from the video windows on the video interaction interface according to the animation motion direction; and animating the end point video window according to the animation information.
  • A non-transitory computer readable storage medium stores computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps: performing motion detection on an object in each video window on a video interaction interface; when motion information of the object in a target video window is detected, determining animation information and an animation motion direction according to the motion information; determining an end point video window from the video windows on the video interaction interface according to the animation motion direction; and animating the end point video window according to the animation information.
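  • As a non-authoritative illustration of the claimed flow, a minimal Python sketch of the four steps follows; the function names and data structures (detect_motion, determine_animation, determine_end_window, play_animation) are hypothetical placeholders.

        # Hypothetical sketch of the claimed animation flow; names are illustrative only.
        def run_animation_step(video_windows, detect_motion, determine_animation,
                               determine_end_window, play_animation):
            # Step 1: perform motion detection on the object in each video window.
            for target_window in video_windows:
                motion_info = detect_motion(target_window)
                if motion_info is None:
                    continue
                # Step 2: when motion information is detected in a target video window,
                # determine the animation information and the animation motion direction.
                animation_info, animation_direction = determine_animation(motion_info)
                # Step 3: determine the end point video window according to the
                # animation motion direction.
                end_window = determine_end_window(video_windows, target_window,
                                                  animation_direction)
                # Step 4: animate the end point video window according to the
                # animation information.
                play_animation(end_window, animation_info)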
  • FIG. 1 is a schematic diagram of a scenario of a video interaction system according to an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of an animation implementation method provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a first animation provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a second animation provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a third animation provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a fourth animation provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a window determination area provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of determining an endpoint window provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of another endpoint window determination according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a window shake animation provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a beating animation provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a fifth animation provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of a sixth animation provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of a seventh animation provided by an embodiment of the present application.
  • FIG. 15 is a schematic diagram of an eighth animation provided by an embodiment of the present application.
  • FIG. 16 is a schematic diagram of a ninth animation provided by an embodiment of the present application.
  • FIG. 17 is another schematic flowchart of an animation implementation method provided by an embodiment of the present application.
  • FIG. 18 is a schematic diagram of a framework of an animation implementation method provided by an embodiment of the present application.
  • FIG. 19 is a schematic diagram of a tenth animation provided by an embodiment of the present application.
  • FIG. 20 is a schematic diagram of an eleventh animation provided by an embodiment of the present application.
  • FIG. 21 is a schematic diagram of a twelfth animation provided by an embodiment of the present application.
  • FIG. 22 is a schematic diagram of a thirteenth animation provided by an embodiment of the present application.
  • FIG. 23 is a schematic diagram of a first structure of an animation implementation apparatus according to an embodiment of the present application.
  • FIG. 24 is a schematic diagram of a second structure of an animation implementation apparatus according to an embodiment of the present application.
  • FIG. 25 is a schematic diagram of a third structure of an animation implementation apparatus according to an embodiment of the present application.
  • FIG. 26 is a schematic structural diagram of a terminal according to an embodiment of the present application.
  • the embodiment of the present application provides a video interaction system, which includes an animation implementation device provided by any one of the embodiments of the present application.
  • the animation implementation device can be integrated into a terminal, and the terminal is a device such as a mobile phone or a tablet computer.
  • the video interaction system may also include other devices, such as servers.
  • the server is used to forward the video uploaded by the terminal.
  • an embodiment of the present application provides a video interaction system, including: a terminal 10, a server 20, a terminal 40, and a terminal 50.
  • the terminal 10 and the server 20 are connected through a network 30, and the terminal 40 and the server 20 are connected through a network 60.
  • the terminal 50 is connected to the server 20 via a network 70.
  • the network 30, the network 60, and the network 70 include network entities such as routers and gateways, which are illustrated in the figure.
  • The terminal 10, the terminal 40, and the terminal 50 can perform message interaction with the server 20 through a wired network or a wireless network; for example, an application (such as a live video application), an application update data package, and/or application-related data information or business information can be downloaded from the server 20.
  • the terminal 10 can be a mobile phone, a tablet computer, a notebook computer, and the like.
  • FIG. 1 is an example in which the terminal 10 is a mobile phone.
  • The terminal 10 can be installed with various applications required by the user, for example, applications with entertainment functions (such as a live video application, an audio playback application, or reading software) and applications with service functions (such as a map navigation application or a group purchase application).
  • Taking a live video application as an example, the terminal 10 can download, from the server 20 as required, the live video application, an update data package of the live video application, and/or data information or business information related to the live video application (such as video data).
  • When the terminal 10 performs video interaction with the terminal 40 and the terminal 50, the terminal 10 can perform motion detection on an object in each video window on the video interaction interface; when motion information of the object in a target video window is detected, the animation information and the animation motion direction are determined according to the motion information; the end point video window is determined from the video windows on the video interaction interface according to the animation motion direction; and the end point video window is animated according to the animation information.
  • the terminal 40 and the terminal 50 perform the same operations as the terminal 10.
  • the terminal 10, the terminal 40, and the terminal 50 respectively display the same animation effect on the video interaction interface, that is, animating the end video window.
  • FIG. 1 is only an example of a system architecture that implements the embodiment of the present application.
  • the embodiment of the present application is not limited to the system structure shown in FIG. 1 above, and various embodiments of the present application are proposed based on the system architecture.
  • An embodiment of the present application provides an animation implementation method, which can be executed by a processor of a terminal. As shown in FIG. 2, the specific process of the animation implementation method is as follows:
  • Step 201 Perform motion detection on an object in each video window on the video interaction interface.
  • the video interaction interface includes a plurality of video windows, and each video window displays a corresponding video image.
  • the video interaction interface can be a multi-person video interaction interface, that is, a video interaction interface when multiple users perform video interaction.
  • the multi-person video interaction interface includes multiple video windows, and each video window displays a video picture of the corresponding user.
  • the object in the video window is an object in the video screen displayed by the video window.
  • the object is the object body of the video picture, and the object body can be set according to actual needs.
  • the object body can be a pet such as a cat or a dog, a person, or the like.
  • an object in a video window can be a user in a video screen displayed by a video window.
  • the action of the object may include a change in the position of the object in the video picture.
  • For example, the action of the object may be a change of the user's position in the video picture, including position changes of the user's head, hands, face, body, feet, and the like.
  • the position change or action of the hand can be referred to as a gesture.
  • The position change or action of the user's entire body may be referred to as a posture, and the position change or action of the face is the user's expression.
  • the terminal may obtain video data of a multi-person live member from the server through a network, and then display video data of the corresponding member in a corresponding video window in the video interaction interface.
  • For example, referring to FIG. 3, the terminal may display four video windows in the video interaction interface, that is, video windows a, b, c, and d, and each video window displays a video picture of the corresponding user.
  • the terminal can perform motion detection on the users in the video screens of the video windows a, b, c, and d, respectively.
  • Step 202 When the motion information of the object in the target video window is detected, the animation information and the motion direction of the animation are determined according to the motion information.
  • the target video window is any video window on the video interaction interface, or a video window in which the object performs an action. For example, when an object in a video window starts to act, the video window is the target video window.
  • the action information of the object may be position change information of the object.
  • For example, the action information may include action information of each part of the user, such as hand action information, head action information, leg action information, and the like.
  • The action information may include an action type and/or an action direction; in this case, corresponding animation information may be determined according to the action type, and the animation motion direction is determined according to the action direction. That is, the step of "determining the animation information and the animation motion direction according to the motion information" may include: determining corresponding animation information according to the action type, and determining a corresponding animation motion direction according to the action direction.
  • the manner of determining the animation information based on the action type may be various.
  • For example, the mapping relationship between the action type and the animation information may be preset; after the action information is detected, the mapping relationship may be queried to obtain the corresponding animation information.
  • The action direction may be used directly as the animation motion direction, or the reverse of the action direction may be used as the animation motion direction, or a direction perpendicular to the action direction may be used as the animation motion direction, and so on.
  • the action type can be divided according to the actual needs.
  • For example, hand movements can be divided into: drawing a heart shape, throwing something, making a fist, archery, tapping, pulling, a scissors gesture, an OK gesture, and the like.
  • the head motion can be divided into: shaking head, nodding head, tilting head, and the like.
  • body movements can be divided into: swinging left and right, swinging up and down, tilting the body, and the like.
  • The action direction is the movement direction of the object's action, that is, the direction toward which the action points.
  • For example, the action direction may be a rightward direction, and the like.
  • The animation information may include animation related information such as an animation trigger position, an animation type, and an animation duration.
  • the animation type can be divided according to actual needs.
  • For example, the animation type can be divided into: video window deformation (such as video window shaking or deformation), displaying an animation in a video window (such as displaying a moving image within the video window), moving an image to another video window, and the like.
  • animation types can also be divided into: heart-shaped animation, bomb animation, bow and arrow animation, and so on.
  • After the motion information is detected, it may be determined whether the motion information meets a preset animation trigger condition; if so, the animation information and the animation motion direction are determined according to the motion information.
  • the preset animation trigger condition is an action condition for triggering the animation, and can be set according to actual needs.
  • the preset animation trigger condition may include the action type being the preset action type.
  • For example, the preset animation trigger condition may include: the user's hand motion is a preset hand motion type, that is, the user's gesture is a preset or specific gesture, such as a gesture of drawing a predetermined pattern (a heart shape, a circle, or the like), a gesture of throwing something, or a fist gesture.
  • The preset animation trigger condition may further include: the user's head motion type is a preset head motion type, that is, the head motion is a specific head motion, such as the user shaking his or her head; or the user's body motion is a specific body motion, that is, the user's posture is a specific posture, for example, the user twists his or her waist, and the like.
  • the preset animation triggering condition may further include: the facial expression type of the user is a preset expression type, that is, the facial expression is a specific expression, for example, the user is angry, open mouth, happy laughter, and the like.
  • The detected action information of the object may include: an action type of the object, position information of the action of the object in the video window, and action direction information of the object. At this time, the step of "determining the animation information and the animation motion direction according to the motion information" may include: determining a corresponding animation trigger position in the target video window according to the position information; determining the animation motion direction according to the action direction information; and determining an animation type to be triggered according to the action type.
  • the mapping relationship between the action type and the animation type may be preset. After the motion information is detected, the type of the animation that needs to be triggered may be determined based on the current action type and the mapping relationship.
  • For example, when it is detected that the user makes a gesture of throwing something in the video window a, the animation trigger position may be determined according to the gesture position, the animation motion direction a' is determined according to the gesture motion direction, and the animation type "throw a bomb" is determined according to the action type "throw something". At this time, a bomb image can be displayed in the video window a according to the animation trigger position, and the bomb image is controlled to move in the animation motion direction a'.
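  • As an illustration of the mapping-based determination described above, the following minimal Python sketch assumes a preset table from action types to animation information; the action and animation type names (throw_something, draw_heart, poke) are hypothetical examples derived from the text.

        # Hypothetical preset mapping between action types and animation information.
        ACTION_TO_ANIMATION = {
            "throw_something": {"animation_type": "throw_bomb", "duration_s": 2.0},
            "draw_heart": {"animation_type": "heart", "duration_s": 1.5},
            "poke": {"animation_type": "window_shake", "duration_s": 1.0},
        }

        def determine_animation(action_info):
            # action_info carries the action type, the action position in the target
            # window, and the action direction, as described above.
            animation_info = dict(ACTION_TO_ANIMATION[action_info["type"]])
            # The animation trigger position follows the action position.
            animation_info["trigger_position"] = action_info["position"]
            # Here the action direction is used directly as the animation motion
            # direction; the reverse or a perpendicular direction could be used instead.
            animation_direction = action_info["direction"]
            return animation_info, animation_direction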
  • Step 203 Determine an end point video window from each video window on the video interaction interface according to the animation motion direction.
  • the end video window is a video window that needs to be finally animated.
  • For example, when the animation is an image moving across windows, the end point video window may be the video window finally reached by the image; when the animation is a video window deformation, the end point video window is the video window that needs to be deformed; when the animation is an image motion within a window, the end point video window may be the video window that needs to display the image and within which the image moves.
  • the manner of determining the end video window based on the moving direction of the animation may be various.
  • For example, a window determination area may be set in advance for each video window in the video interaction interface, and then the end point video window is determined based on the animation motion direction and the window determination area of each video window.
  • Specifically, the step of "determining the end point video window of the image motion according to the motion direction" may include: determining the window determination area of each candidate video window, where a candidate video window is a video window other than the target video window in the video interaction interface; drawing a corresponding line on the video interaction interface according to the animation motion direction; determining the target window determination area that the line contacts first; and selecting the candidate video window corresponding to the target window determination area as the end point video window.
  • the window determination area is an area for determining the end video window in the video interaction interface.
  • the video interaction interface includes a window determination area corresponding to each video window.
  • the sum of the areas of all window decision regions is equal to the sum of all video window areas.
  • the window determination area of the video window may be divided according to actual needs, and the window determination area of the video window includes at least a part of the area of the video window.
  • the video window a is the target video window.
  • the window determination area of the video window b is the area 701
  • the window determination area of the video window c is the area 702
  • the window determination area of the video window d is the area 703.
  • Area 701 contains a portion of video window b, while area 703 contains the entire video window d as well as portions of video windows b and c.
  • The video window reached preferentially is the video window that the image reaches first.
  • The implementation manner of determining the target window determination area that the image reaches first when the image moves according to the animation motion direction may include: drawing a straight line on the video interaction interface according to the animation motion direction, and determining the first window determination area contacted by the straight line as the target window determination area.
  • When the image 700 moves according to the motion direction a", a line can be drawn on the video interaction interface according to the motion direction a". It can be seen that the line first contacts the window determination area 701 of the video window b, so the video window b is the end point video window that the image reaches.
  • the motion end position of the image motion is determined to be the boundary position of the target video window.
  • the end position of the motion may be the boundary position of the target video window that is reached when the image moves in the direction of the animation motion.
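  • The endpoint-window selection can be pictured with the following sketch, which steps along the animation motion direction from the trigger position and picks the candidate window whose determination area is contacted first; representing the determination areas as axis-aligned rectangles, and the sample coordinates, are assumptions made only for illustration.

        # Hypothetical sketch: window determination areas are rectangles (x, y, width, height);
        # positions and directions are 2-D tuples in interface coordinates.
        def first_contacted_window(trigger_pos, direction, determination_areas,
                                   step=1.0, max_steps=10000):
            """Walk along the animation motion direction and return the candidate window
            whose determination area is contacted first, or None if none is reached."""
            x, y = trigger_pos
            dx, dy = direction
            for _ in range(max_steps):
                x, y = x + dx * step, y + dy * step
                for window_id, (ax, ay, w, h) in determination_areas.items():
                    if ax <= x <= ax + w and ay <= y <= ay + h:
                        return window_id
            return None

        # Candidate windows exclude the target video window, e.g. for target window "a":
        areas = {"b": (360, 0, 360, 320), "c": (0, 320, 360, 320), "d": (360, 320, 360, 320)}
        end_window = first_contacted_window((180, 160), (1.0, 0.2), areas)  # -> "b" here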
  • Step 204 Perform animation display on the end video window according to the animation information.
  • The animation can be displayed in a variety of ways, as follows:
  • the step "animating the end point video window according to the animation information” includes: deforming the end point video window according to the animation information.
  • the deformation of the video window may include: a shape change of the video window, a position change, a background change, and the like.
  • For example, the change in position may include: video window shaking (jitter), video window rotation, video window jumping, and the like.
  • the animation information may include a video window deformation type, and at this time, the deformation may be performed according to the window deformation type corresponding to the end video window.
  • For example, when a "poke" action is made, the action information of the "poke" (such as the action type and the action direction) can be detected; the animation motion direction c' and the window deformation type (such as window shaking) are determined based on the action information, and the end point video window is determined to be the video window d according to the animation motion direction c'. At this time, the video window d is controlled to shake.
  • Similarly, when a "blow" action is made, the action information of the "blow" (such as the action type and the action direction) can be detected; the animation motion direction and the window deformation type (such as window jumping) are determined based on the action information, and the end point video window is determined to be the video window d according to the animation motion direction. At this time, the video window d is controlled to jump.
  • Likewise, when a "rotation" action is made, the action information of the "rotation" (such as the action type and the action direction) can be detected; the animation motion direction and the window deformation type (such as window rotation) are determined based on the action information, and the end point video window is determined to be the video window b according to the animation motion direction. At this time, the video window b is controlled to rotate.
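  • A short sketch of this deformation variant, under the same assumptions: the window deformation type carried in the animation information selects how the end point video window is deformed; the window object and its shake/jump/rotate methods are hypothetical.

        # Hypothetical deformation dispatch for the end point video window.
        def deform_end_window(end_window, animation_info):
            deformation = animation_info.get("window_deformation")
            if deformation == "shake":
                end_window.shake()    # e.g. triggered by a "poke" action
            elif deformation == "jump":
                end_window.jump()     # e.g. triggered by a "blow" action
            elif deformation == "rotate":
                end_window.rotate()   # e.g. triggered by a "rotation" action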
  • the step "Animating the end video window according to the animation information” includes displaying an image corresponding to the animation information in the end video window and controlling the image to move within the end video window.
  • the animation information may include an image type of the animated subject image.
  • an image corresponding to the image type may be displayed in the end video window, and the image is controlled to move within the end video window.
  • the image type may be divided according to actual needs, and may include, for example, a hammer image, a shovel image, a sword image, a flame image, and the like.
  • For example, when a "beating" action is made, the action information of the "beating" (such as the action type and the action direction) can be detected; the animation motion direction d' and the type of the animation body image (such as a "hammer" image) are determined based on the action information, and the end point video window is determined to be the video window d according to the animation motion direction d'. At this time, a "hammer" image is displayed in the video window d, and the "hammer" image is controlled to move within the video window d, for example, tapping continuously.
  • the animation trigger position may also be determined based on the position information of the object in the end video window, and then the image corresponding to the animation information is displayed according to the animation trigger position, and the image is controlled to move within the end video window.
  • the action frequency of the object in the target video window may also be acquired, and the image is controlled to move in the end video window based on the action frequency.
  • For example, the action frequency of the "beating" in the video window c may be acquired, and during the animation display, the motion frequency of the image in the end point video window is controlled based on the action frequency.
  • In still another embodiment, the step of "animating the end point video window according to the animation information" includes: displaying a corresponding image in the target video window according to the animation information; controlling the image to move toward a target end position of the end point video window according to the animation motion direction; determining a motion end position within the end point video window; and updating the target end position to the motion end position.
  • The displayed image may be a static image or a dynamic image, such as a dynamic expression or a motion map, for example a dynamic heart shape, a dynamic bow, a dynamic fist, a dynamic kiss, and the like.
  • the type of the image can be divided according to actual needs, such as dividing into expression images, shooting images, funny images, and the like.
  • The animation information may include a type of the animation body image, and the type of the image may correspond to the action type of the object. Specifically, the image type of the image to be displayed may be determined according to the action type, and the corresponding image is displayed at the corresponding position in the target video window according to the image type.
  • the animation information may also include an animation trigger position, and at this time, the corresponding image may be displayed in the target video window according to the animation trigger position.
  • the end position of the motion is the actual end position of the image motion, and the motion stops when the image moves to the end position of the motion.
  • the target end position can be set according to actual needs, for example, any position within the end video window.
  • For example, the center position of the end point video window can be selected as the motion end position, or another position in the end point video window can be selected as the motion end position.
  • For example, when it is detected that the user makes a gesture of throwing something in the video window a, the animation trigger position may be determined according to the gesture position, the animation motion direction a' is determined according to the gesture motion direction, and the animation type "throw a bomb" is determined according to the action type "throw something". At this time, a bomb image is displayed in the video window a according to the animation trigger position, and the bomb image is controlled to move in the direction a' toward the target end position of the end point video window b.
  • A motion end position, such as the center position of the video window b, can be determined in the end point video window; at this time, the target end position is updated to the motion end position.
  • In this way, the bomb image can be controlled to move to the center position of the video window b and stop moving when it reaches that position.
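  • The third display variant (show the image in the target window, move it toward a target end position in the end point window, and replace the target end position with the actual motion end position when a receiving action is detected) can be sketched as follows; the window and image methods are hypothetical.

        # Hypothetical sketch of the cross-window image motion.
        def animate_cross_window(target_window, end_window, animation_info,
                                 animation_direction, detect_motion):
            # Display the corresponding image at the animation trigger position
            # in the target video window.
            image = target_window.show_image(animation_info["animation_type"],
                                             animation_info["trigger_position"])
            # Initial target end position, e.g. the center of the end point window.
            target_end = end_window.center()
            # If the object in the end point window performs a receiving action
            # (e.g. cupping both hands for a heart image), update the target end
            # position to the actual motion end position.
            receiving = detect_motion(end_window)
            if receiving is not None and receiving.get("is_receiving"):
                target_end = receiving["position"]
            # Move the image along the animation motion direction to the end position.
            image.move_to(target_end, direction=animation_direction)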
  • When the image includes a plurality of sub-images, a target sub-image may be controlled to move toward the target end position on the video interaction interface in accordance with the image motion direction.
  • For example, when the image is a bow and arrow image, the arrow sub-image can be controlled to move toward the target end position according to the image motion direction.
  • After the image is displayed, the image may be adjusted according to the current animation motion direction, so that the image follows the object's motion more closely; an image motion trigger condition may also be set to make the image effect more accurate.
  • Specifically, the step of "controlling the image to move toward the target end position of the end point video window according to the animation motion direction" includes: rotating the image according to the animation motion direction; and when the current motion information of the object in the target video window satisfies a preset image motion trigger condition, controlling the rotated image to move toward the target end position of the end point video window in accordance with the animation motion direction.
  • the embodiment of the present application may continue to perform motion detection on the object in the target video window after the motion information meets the preset image triggering condition, and trigger the image motion when the subsequently detected motion information satisfies certain conditions.
  • The preset image motion trigger condition is a condition for triggering the image motion, and may be set according to actual needs.
  • the preset image triggering condition includes the user making a first specific gesture
  • the preset image motion triggering condition may include the user making the second specific gesture.
  • the preset image motion trigger condition may be associated with a preset image trigger condition; for example, the second specific gesture and the first specific gesture are consecutive gestures or associated gestures.
  • In an embodiment, the motion end position may be determined based on the action information of the object within the end point video window. That is, the step of "determining the corresponding motion end position in the end point video window" may include: performing motion detection on the object in the end point video window; and when it is detected that the action information of the object in the end point video window satisfies a preset image receiving condition, determining the motion end position of the image motion within the end point video window according to the action information of the object in the end point video window.
  • The object in the end point video window is an object in the video picture displayed by the end point video window.
  • the object is the object body of the video picture, and the object body can be set according to actual needs.
  • the object body can be a pet such as a cat or a dog, a person, or the like.
  • the action information of the object in the end video window may include location information of the action of the object in the end video window.
  • the embodiment of the present application may determine the motion end position of the image in the end video window based on the location information.
  • the preset image receiving condition is an action condition for triggering image receiving, and the receiving condition may be set according to actual needs.
  • the preset image receiving condition may include: the user's hand motion is a preset hand motion, that is, the user's gesture is a preset gesture, that is, a specific gesture.
  • the preset image receiving condition may correspond to or be related to a preset image triggering condition or a displayed image type.
  • For example, when the image is a heart-shaped image or a circular image, the preset image receiving condition may include: the user's action is a holding action with both hands; when the image is a bow and arrow image, the preset image receiving condition may include: the user's action is a being-hit-by-an-arrow action (such as tilting the head); when the image is a bomb image, the preset image receiving condition may include: the user's action is a being-hit-by-a-bomb action (such as rubbing the eyes with both hands).
  • The object may also be tracked in real time, the animation motion direction may be determined from the tracked motion information, and the initially determined animation motion direction may be updated accordingly.
  • Specifically, the method of the embodiment may include: performing real-time motion tracking on the object in the target video window when the motion information of the object in the target video window is detected; and updating the animation motion direction based on the tracked motion information.
  • the animation information and the motion direction of the animation may be determined according to the motion information, and the real-time motion tracking of the object in the target video window may be performed. Then, the animation motion direction is updated based on the tracked motion information.
  • For example, the preset animation trigger condition may be that the action type is the preset action type, and so on.
  • The tracked motion information may include direction information of the object's motion, motion trend information of the object's motion, and the like, for example, the movement trend of the user's gesture.
  • For example, real-time motion tracking can be performed on the action, and the animation information (such as the animation body image type and the animation trigger position) is determined according to the first detected action information; when the object in the video window continues to move, the animation motion direction may be determined again based on the motion information tracked for the subsequent motion, and the animation motion direction may be updated.
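  • A sketch, under the same assumptions, of updating the initially determined animation motion direction from real-time tracking of the object in the target video window; the per-frame track_motion callable is hypothetical.

        # Hypothetical sketch of direction updating from real-time motion tracking.
        def track_and_update_direction(target_window, initial_direction, track_motion,
                                       frames=30):
            animation_direction = initial_direction
            for _ in range(frames):
                tracked = track_motion(target_window)   # per-frame motion tracking result
                if tracked is None:
                    continue
                # Use the latest tracked motion direction (or motion trend) of the
                # object to update the animation motion direction.
                animation_direction = tracked["direction"]
            return animation_direction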
  • For example, when it is detected that the user draws a heart-shaped gesture in the video window a, real-time motion tracking can be performed on the user, and the animation information, such as the animation body image type, the animation trigger position, and the animation motion direction, is determined in the video window a according to the heart-shaped gesture; an image such as a heart-shaped image is then displayed at the corresponding position of the video window a.
  • The heart-shaped image is rotated by a certain angle according to the motion direction a", and when it is detected that the user hits the image toward the right, the rotated heart-shaped image can be controlled to move in the motion direction a".
  • The action of the user in the video window b can be detected. When the user's action is an action of holding with both hands, the corresponding motion end position F' can be determined in the video window b based on the user's action information, and the target end position F is updated to the motion end position F'. The heart-shaped image then moves to the motion end position F' in accordance with the motion direction a", at which point the motion of the image ends.
  • In an embodiment, a matching image matching the image may be displayed in the end point video window according to the motion end position.
  • For example, referring to FIG. 6, when the image is a bomb image and the bomb reaches the motion end position in the video window b, an explosion image matching the bomb image can be displayed in the video window b according to the motion end position.
  • In an embodiment, when the image moves to the target end position, the image is controlled to stay at the target end position for a preset duration; while the image stays, the image is controlled to move according to the current motion information of the object within the end point video window.
  • For example, when the image reaches the target end position, the image stays at that position for 20 s. During the stay, the image can be moved according to the user's action in the end point video window, that is, the image follows the user's motion.
  • In another case, the image disappears when it reaches the position, that is, the image disappears after flying out of the video window.
  • In summary, in the embodiment of the present application, motion detection is performed on the object in each video window on the video interaction interface; when the motion information of the object in the target video window is detected, the animation information and the animation motion direction are determined according to the motion information; the end point video window is determined from the video windows on the video interaction interface according to the animation motion direction; and the end point video window is animated according to the animation information.
  • This solution can realize cross-window animation on the video interaction interface based on the action information of the object in the video window. For a multi-person video interaction, each interacting terminal only needs to perform the same operation, that is, detect the action information of the object inside the video window and perform the corresponding cross-window animation based on the detection result, to achieve synchronized animation effects in a multi-person video chat or multi-person live video broadcast, without transmitting the animation image to the terminals of the other video chat or live video users. Therefore, network resources can be saved and the synchronization of animation effects can be improved.
  • The embodiment of the present application can implement cross-window animation on the video interaction interface according to the action of the user in the video and present different cross-window animation feedback, so that users of different video windows can have an interactive experience closer to the real world, weakening the sense of distance in real space and greatly enhancing the interactivity of video interaction.
  • the embodiment of the present application provides an animation implementation system, including a terminal and a server. Referring to FIG. 1, the terminal and the server are connected through a network.
  • the animation implementation method of the present application will be further described below based on the animation implementation system shown above, taking a cross-window animation as an example.
  • an animation implementation method can be as follows:
  • Step 1701 The terminal displays a video screen of the corresponding user in each video window on the video interaction interface.
  • the video interaction interface includes multiple video windows, and each video window displays a video picture of the corresponding user.
  • the video screens of the corresponding users can be respectively displayed in the 4 video windows on the video interaction interface.
  • the video window a displays the video screen of the user A
  • the video window b displays the video screen of the user B
  • the video window c displays the video screen of the user C
  • the video window d displays the video screen of the user D.
  • Step 1702 The terminal detects the motion of the user in the video screen of each video window.
  • the terminal can perform motion detection on the user in the video screen of the video windows a, b, c, and d, respectively.
  • Step 1703 When it is detected that the motion information of the user in the target video window satisfies the preset animation trigger condition, the animation information is determined according to the motion information, and the user in the target video window is tracked in real time.
  • the target video window is any video window on the video interaction interface, or a video window in which the object performs an action. For example, when an object in a video window starts to act, the video window is the target video window.
  • the action information may include action type, action direction, and action position information; at this time, corresponding animation information may be determined according to the action type, and the motion direction of the animation may be determined according to the action direction.
  • the terminal determines an animation motion direction according to the motion direction, determines an image type of the animation body image according to the motion type, and determines a corresponding image trigger position in the target video window according to the motion position information.
  • the action type can be divided according to the actual needs.
  • For example, hand movements can be divided into: drawing a heart shape, throwing something, making a fist, archery, tapping, pulling, a scissors gesture, an OK gesture, and the like.
  • For example, real-time motion tracking can be performed on the user D in the video window d.
  • the preset animation trigger condition is an action condition for triggering the animation, and can be set according to actual needs.
  • the preset animation trigger condition may include the action type being the preset action type.
  • For example, the preset animation trigger condition may include: the user's hand motion is a preset hand motion type, that is, the user's gesture is a preset or specific gesture, such as a gesture of drawing a predetermined pattern (a heart shape, a circle, or the like), a gesture of throwing something, or a fist gesture.
  • The image may be a static image or a dynamic image; the dynamic image may include a dynamic expression or a motion map, such as a dynamic heart shape, a dynamic bow, a dynamic fist, a dynamic kiss, and so on.
  • The type of image can be divided according to actual needs; for example, according to function, it can be divided into: an express-love type (such as a heart-shaped image or an air kiss image), a shooting type (such as a bow and arrow image or a firearms image), a funny type (such as expressions), and the like.
  • For example, the image trigger position, the image type, and the initial animation motion direction are determined based on the action information (for example, the image type is a shooting-type image), and real-time motion tracking is simultaneously performed on the user D in the video window d.
  • Step 1704 The terminal displays a corresponding image in the target video window according to the animation information.
  • the terminal displays a corresponding image at the image trigger position according to the determined image type.
  • the image trigger position and the image type are determined based on the motion information, such as the image type is a shooting type image.
  • a bow image can be displayed at the position of the fist in the video window d.
  • Step 1705 The terminal updates the motion direction of the animation according to the motion information tracked to the user.
  • The tracked motion information may include direction information of the object's motion, motion trend information of the object, and the like, for example, the movement trend of the user's gesture.
  • the user's motion can also be tracked in real time, and the animation motion direction is updated based on the tracked information.
  • Step 1706 The terminal determines an end video window from each video window on the video interaction interface according to the updated animation motion direction.
  • the end video window is the video window where the image finally arrives.
  • the end video window can be determined as the video window a according to the direction of motion.
  • the image stops moving.
  • the manner of determining the end video window based on the moving direction of the animation may be various.
  • For example, the window determination area of each video window may be set in advance in the video interaction interface, and then the end point video window is determined based on the animation motion direction and the window determination area of each video window.
  • the end video window determining process can refer to the description of the above embodiment.
  • Step 1707 The terminal controls the image to move toward the target end position of the end point video window according to the animation motion direction.
  • a sub-image may be selected to move to the target end position according to the image moving direction on the video interaction interface.
  • the target end position may be any position in the video interaction interface in the moving direction of the image; the target end position may be set according to actual needs.
  • the target end position value may be empty.
  • the image motion trigger condition may also be set to make the image effect more accurate; for example, when the current motion information of the object meets the preset image motion trigger condition in the target video window, the control image is in the video interaction according to the image motion direction. Move to the target end position on the interface.
  • The preset image motion trigger condition is a condition for triggering the image motion, and may be set according to actual needs.
  • the preset image triggering condition includes the user making a first specific gesture
  • the preset image motion triggering condition may include the user making the second specific gesture.
  • the preset image motion trigger condition may be associated with a preset image trigger condition; for example, the second specific gesture and the first specific gesture are consecutive gestures or associated gestures.
  • Step 1708 The terminal determines a motion end position in the end video window, and updates the target end position to the motion end position.
  • Specifically, the action of the object in the end point video window may be detected; when it is detected that the action information of the object in the end point video window satisfies the preset image receiving condition, the motion end position of the image motion is determined within the end point video window according to the action information of the object in the end point video window, and the target end position is updated to the motion end position.
  • the action information of the object in the end video window may include location information of the action of the object in the end video window.
  • the embodiment of the present application may determine the motion end position of the image in the end video window based on the location information.
  • the preset image receiving condition may correspond to or be related to the displayed image type.
  • For example, when the image is a heart-shaped image or a circular image, the preset image receiving condition may include: the user's action is a holding action with both hands; when the image is a bow and arrow image, the preset image receiving condition may include: the user's action is a being-hit-by-an-arrow action (such as tilting the head); when the image is a bomb image, the preset image receiving condition may include: the user's action is a being-hit-by-a-bomb action (such as rubbing the eyes with both hands).
  • For example, motion detection can be performed on the user A in the video window a, and the motion end position can be determined based on the action of the user A. Specifically, when the user A makes a being-hit-by-an-arrow action, the motion end position can be determined based on the position information of that action in the window a, and then the target end position is updated to the motion end position.
  • the arrow will hit the face of User A.
  • In an embodiment, when the image moves to the target end position, the image is controlled to stay at the target end position for a preset duration; while the image stays, the image is controlled to move according to the current motion information of the object within the end point video window.
  • For example, the preset duration is 20 s. During the stay, the arrow will follow the movement of the user A's face.
  • A matching image that matches the image may be displayed in the end point video window according to the motion end position. For example, referring to FIG. 6c, when the arrow hits the face of the user A, a bleeding image is displayed at the corresponding position of the user A's face.
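  • The arrival behaviour described above (displaying a matching image at the motion end position, keeping the image there for a preset duration while it follows the object's motion) can be sketched as follows; the image and window methods are hypothetical, and the cleanup after the stay is an assumption.

        import time

        # Hypothetical sketch of the behaviour when the image reaches the motion end position.
        def on_image_arrival(image, end_window, detect_motion, stay_seconds=20):
            # Display a matching image (e.g. an explosion for a bomb image, a bleeding
            # image for an arrow hit) at the motion end position.
            end_window.show_image(image.matching_image(), image.position())
            # Keep the arriving image at the end position for the preset duration,
            # following the current motion of the object in the end point video window.
            start = time.time()
            while time.time() - start < stay_seconds:
                motion = detect_motion(end_window)
                if motion is not None:
                    image.move_to(motion["position"])   # the image follows the object
                time.sleep(0.033)                       # roughly one frame at 30 fps
            image.remove()                              # assumed cleanup once the stay ends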
  • In summary, in the embodiment of the present application, motion detection is performed on the object in each video window on the video interaction interface; when the motion information of the object in the target video window is detected, the animation information and the animation motion direction are determined according to the motion information; the end point video window is determined from the video windows on the video interaction interface according to the animation motion direction; and the end point video window is animated according to the animation information.
  • This solution can realize cross-window animation on the video interaction interface based on the action information of the object in the video window. For a multi-person video interaction, each interacting terminal only needs to perform the same operation, that is, detect the action information of the object inside the video window and perform the corresponding cross-window animation based on the detection result, to achieve synchronized animation effects in a multi-person video chat or multi-person live video broadcast, without transmitting the animation image to the terminals of the other video chat or live video users. Therefore, network resources can be saved and the synchronization of animation effects can be improved.
  • The embodiment of the present application can implement cross-window animation on the video interaction interface according to the action of the user in the video and present different cross-window animation feedback, so that users of different video windows can have an interactive experience closer to the real world, weakening the sense of distance in real space and greatly enhancing the interactivity of video interaction.
  • the embodiment of the present application further provides an animation implementation apparatus.
  • referring to FIG. 23, the animation implementation apparatus may include a detection unit 2301, an information determining unit 2302, a window determining unit 2303, and an animation display unit 2304, as follows:
  • the detecting unit 2301 is configured to perform motion detection on an object in each video window on the video interaction interface.
  • the information determining unit 2302 is configured to determine the animation information and the moving direction of the animation according to the motion information when the detecting unit 2301 detects the motion information of the object in the target video window.
  • the window determining unit 2303 is configured to determine an end point video window from each video window on the video interaction interface according to the animation moving direction.
  • the animation display unit 2304 is configured to perform animation display on the end video window according to the animation information.
  • the animation display unit 2304 is configured to: deform the end video window according to the animation information.
  • the animation display unit 2304 is configured to: display an image corresponding to the animation information in the end video window, and control the image to move in the end video window.
  • the animation display unit 2304 includes:
  • the image display subunit 23041 is configured to display a corresponding image in the target video window according to the animation information.
  • the control subunit 23042 is configured to control the image to move toward the target end position of the end video window according to the moving direction of the animation.
  • the position determining sub-unit 23043 is configured to determine a motion end position within the end video window and update the target end position to the motion end position.
  • the action information includes an action type and a motion direction; the information determining unit 2302 is configured to determine corresponding animation information according to the action type, and determine a corresponding animation motion direction according to the action direction.
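  • As a minimal, hedged sketch of this mapping step (not the patent's code), the table and function below pick animation information from the action type and reuse the action direction as the animation motion direction; the table entries are illustrative only.

```python
ANIMATION_TABLE = {
    # action type -> animation information (contents are illustrative only)
    "throw": {"animation_type": "bomb", "duration_s": 2.0},
    "draw_heart": {"animation_type": "heart", "duration_s": 1.5},
    "fist": {"animation_type": "bow_and_arrow", "duration_s": 2.5},
}

def determine_animation(action_type, action_direction):
    """Map a detected action to (animation_info, animation_motion_direction)."""
    info = ANIMATION_TABLE.get(action_type)
    if info is None:
        return None, None                  # this action does not trigger an animation
    return info, action_direction          # simplest policy: reuse the action direction
```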
  • in an embodiment, referring to FIG. 25, the animation implementation apparatus may further include a motion tracking unit 2305, configured to: perform real-time motion tracking on the object in the target video window when the detection unit 2301 detects motion information of the object in the target video window; and update the animation motion direction according to the tracked motion information.
  • the window determining unit 2303 is configured to: determine a window judgment area of each candidate video window, where a candidate video window is a video window in the video interaction interface other than the target video window; draw a corresponding straight line on the video interaction interface according to the animation motion direction; determine the target window judgment area that the straight line contacts first; and use the candidate video window corresponding to the target window judgment area as the endpoint video window.
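  • One possible realization of the "first contacted judgment area" test is sketched below, under the assumption that each window judgment area is an axis-aligned rectangle (x, y, w, h) and the animation motion direction is a 2-D vector: a ray is cast from the trigger position and the judgment area hit at the smallest non-negative distance is taken as the endpoint; a None result corresponds to the boundary-position case described further down.

```python
import math

def first_hit_window(origin, direction, judgment_areas):
    """Return the id of the window judgment area that a ray from `origin`
    along `direction` contacts first, or None if it contacts none."""
    ox, oy = origin
    dx, dy = direction
    best_t, best_id = math.inf, None
    for window_id, (x, y, w, h) in judgment_areas.items():
        t_min, t_max = 0.0, math.inf
        hit = True
        # Slab test against the horizontal and vertical extents of the rectangle.
        for o, d, lo, hi in ((ox, dx, x, x + w), (oy, dy, y, y + h)):
            if abs(d) < 1e-9:
                if o < lo or o > hi:       # ray parallel to this slab and outside it
                    hit = False
                    break
            else:
                t1, t2 = (lo - o) / d, (hi - o) / d
                t_min = max(t_min, min(t1, t2))
                t_max = min(t_max, max(t1, t2))
        if hit and t_min <= t_max and t_min < best_t:
            best_t, best_id = t_min, window_id
    return best_id

# Example layout: three judgment areas; a trigger at (100, 100) moving right hits "b".
areas = {"b": (300, 0, 200, 150), "c": (0, 300, 200, 150), "d": (300, 300, 200, 150)}
print(first_hit_window((100, 100), (1.0, 0.0), areas))  # -> "b"
```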
  • the location determining sub-unit 23043 is configured to: perform motion detection on an object in the endpoint video window; and when detecting that the motion information of the object in the endpoint video window meets the preset image receiving condition, according to the endpoint video window The motion information of the object determines the motion end position of the image motion within the target video window.
  • in an embodiment, the control subunit 23042 is configured to: rotate the image according to the animation motion direction; and, when the current motion information of the object in the target video window meets the preset image motion triggering condition, control the rotated image to move toward the target end position in the animation motion direction.
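  • A small sketch of this rotate-then-launch behaviour is given below; it assumes screen coordinates with y growing downward and artwork that points to the right when unrotated, and the trigger comparison is deliberately simplified to an equality check on an action label, none of which is specified by the patent.

```python
import math

def rotation_for_direction(direction):
    """Angle (degrees) to rotate an image whose unrotated artwork points right,
    in screen coordinates where y grows downward."""
    dx, dy = direction
    return math.degrees(math.atan2(dy, dx))

def maybe_launch(image, current_action, trigger_action, start_motion):
    """Start the cross-window motion only once the preset trigger action occurs
    (simplified here to an equality test on an action label)."""
    if current_action == trigger_action:   # e.g. the clenched fist is released
        start_motion(image)
```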
  • the window determining unit 2303 is further configured to: determine that the motion end position of the image motion is the boundary position of the target video window when the line does not contact any of the window determination regions.
  • the preferentially reached video window is the video window that the image reaches first.
  • determining the target window judgment area that the image reaches first when moving in the animation motion direction may be implemented as follows: draw a straight line on the video interaction interface in the animation motion direction, and take the window judgment area that the straight line contacts first as the target window judgment area that the image reaches first.
  • in an embodiment, the control subunit 23042 is further configured to: when the image moves to the target end position, control the image to stay at the target end position for a preset duration; and, while the image stays, control the image to move accordingly based on the current motion information of the object in the endpoint video window.
  • in specific implementations, each of the foregoing units may be implemented as an independent entity, or the units may be combined arbitrarily and implemented as one or more entities.
  • for the specific implementation of each of the foregoing units, reference may be made to the foregoing method embodiments; details are not described herein again.
  • the animation implementation device may be integrated into the terminal, for example, in the form of a client, and the terminal may be a device such as a mobile phone or a tablet computer.
  • as can be seen from the above, in the animation implementation apparatus of the embodiment of the present application, the detection unit 2301 performs motion detection on the object in each video window on the video interaction interface; when the detected motion information of the object in the target video window meets the preset image triggering condition, the information determining unit 2302 determines the animation information and the animation motion direction according to the motion information; the window determining unit 2303 determines the endpoint video window from the video windows on the video interaction interface according to the animation motion direction; and the animation display unit 2304 displays the animation for the endpoint video window according to the animation information.
  • this solution can realize cross-window animation on the video interaction interface based on the motion information of the object in the video window. For each party in a multi-person video interaction, each party's terminal only needs to perform the same operations, namely detecting the motion information of the object in the video window and rendering the corresponding cross-window animation based on the detection result, to keep the animation effects synchronized in a multi-person video chat or multi-person live video broadcast, without transmitting video frames carrying the animation effect to the terminals of the other video chat or live broadcast users; therefore, network resources are saved and the synchronization of the animation effects is improved.
  • the embodiment of the present application further provides a terminal, which may be a device such as a mobile phone or a tablet computer.
  • referring to FIG. 26, an embodiment of the present application provides a terminal 2600, which may include a processor 2601 with one or more processing cores, a memory 2602 with one or more computer-readable storage media, a radio frequency (RF) circuit 2603, a power supply 2604, an input unit 2605, a display unit 2606, and other components.
  • the processor 2601 is the control center of the terminal; it connects the various parts of the entire terminal through various interfaces and lines and, by running or executing the software programs and/or modules stored in the memory 2602 and invoking the data stored in the memory 2602, performs the various functions of the terminal and processes data, thereby monitoring the terminal as a whole.
  • the processor 2601 may include one or more processing cores; preferably, the processor 2601 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, an application, and the like.
  • the modem processor primarily handles wireless communications. It can be understood that the above modem processor may not be integrated into the processor 2601.
  • the memory 2602 can be used to store software programs and modules, and the processor 2601 executes various functional applications and data processing by running software programs and modules stored in the memory 2602.
  • the RF circuit 2603 may be used to receive and send signals during the transmission and reception of information; in particular, downlink information from a base station is received and then handed over to the one or more processors 2601 for processing, and uplink data is sent to the base station.
  • the terminal also includes a power source 2604 (such as a battery) for powering various components.
  • the power supply may be logically connected to the processor 2601 through a power management system, so that functions such as charging, discharging, and power consumption management are managed through the power management system.
  • the power supply 2604 can also include any one or more of a DC or AC power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
  • the terminal can also include an input unit 2605 that can be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function controls.
  • the terminal may further include a display unit 2606, which may be used to display information entered by the user or information provided to the user, as well as the various graphical user interfaces of the terminal; these graphical user interfaces may be composed of graphics, text, icons, video, and any combination thereof.
  • the display unit 2606 may include a display panel. Optionally, the display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like.
  • specifically, in this embodiment, the processor 2601 in the terminal loads, according to the following instructions, the executable files corresponding to the processes of one or more applications into the memory 2602, and the processor 2601 runs the applications stored in the memory 2602 to implement various functions.
  • in an embodiment, an electronic device is provided, including a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the following steps: performing motion detection on the object in each video window on the video interaction interface; when motion information of the object in the target video window is detected, determining the animation information and the animation motion direction according to the motion information; determining the endpoint video window from the video windows on the video interaction interface according to the animation motion direction; and displaying the animation for the endpoint video window according to the animation information.
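  • Putting the preceding steps together, the control-flow sketch below shows one per-frame pass over all video windows; every helper (detect_action, determine_animation, first_hit_window, play_animation) is injected as a callable and is assumed to behave like the sketches given earlier, so this is an illustration of the ordering of the steps rather than the claimed implementation.

```python
def on_frame(windows, detect_action, determine_animation, first_hit_window, play_animation):
    """One pass over all video windows on the interaction interface (a sketch,
    with every dependency injected so nothing here is tied to a real API)."""
    for target_id, target_window in windows.items():
        action = detect_action(target_window)            # step 1: motion detection
        if action is None:
            continue                                     # nothing detected in this window
        info, direction = determine_animation(action["type"], action["direction"])
        if info is None:
            continue                                     # the action triggers no animation
        # step 3: endpoint window = first judgment area contacted along the direction
        areas = {wid: w["judgment_area"] for wid, w in windows.items() if wid != target_id}
        end_id = first_hit_window(action["position"], direction, areas)
        # step 4: animate; end_id may be None, meaning the motion stops at the boundary
        play_animation(info, direction, target_id, end_id)
```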
  • in an embodiment, when the computer-readable instructions are executed by the processor and the processor performs the step of animating the endpoint video window according to the animation information, the following step is performed: deforming the endpoint video window according to the animation information.
  • in an embodiment, when the computer-readable instructions are executed by the processor and the processor performs the step of animating the endpoint video window according to the animation information, the following steps are performed: displaying an image corresponding to the animation information in the endpoint video window, and controlling the image to move within the endpoint video window.
  • in an embodiment, when the computer-readable instructions are executed by the processor and the processor performs the step of animating the endpoint video window according to the animation information, the following steps are performed: displaying the corresponding image in the target video window according to the animation information; controlling the image to move toward the target end position of the endpoint video window in the animation motion direction; and determining the motion end position within the endpoint video window and updating the target end position to the motion end position.
  • in an embodiment, the motion information includes an action type and a motion direction; when the computer-readable instructions are executed by the processor and the processor performs the step of determining the animation information and the animation motion direction according to the motion information, the following steps are performed: determining the corresponding animation information according to the action type, and determining the corresponding animation motion direction according to the motion direction.
  • in an embodiment, when the computer-readable instructions are executed by the processor, the processor further performs the following steps: performing real-time motion tracking on the object within the target video window when the motion information of the object within the target video window is detected; and updating the animation motion direction based on the tracked motion information.
  • in an embodiment, the motion information includes an action type and a motion direction; when the computer-readable instructions are executed by the processor and the processor performs the step of controlling the image to move toward the target end position of the endpoint video window in the animation motion direction, the following steps are performed: rotating the image according to the animation motion direction; and, when the current motion information of the object in the target video window meets the preset image motion triggering condition, controlling the rotated image to move toward the target end position in the animation motion direction.
  • in an embodiment, when the computer-readable instructions are executed by the processor and the processor performs the step of determining the endpoint video window from the video windows on the video interaction interface according to the animation motion direction, the following steps are performed: determining the window judgment area of each candidate video window, where a candidate video window is a video window in the video interaction interface other than the target video window; drawing a corresponding straight line on the video interaction interface according to the animation motion direction; determining the target window judgment area that the straight line contacts first; and using the candidate video window corresponding to the target window judgment area as the endpoint video window.
  • in an embodiment, when the computer-readable instructions are executed by the processor and the processor performs the step of determining the motion end position within the endpoint video window, the following steps are performed: performing motion detection on the object within the endpoint video window; and, when it is detected that the motion information of the object in the endpoint video window satisfies the preset image receiving condition, determining the motion end position of the image motion within the target video window according to the motion information of the object in the endpoint video window.
  • in an embodiment, when the computer-readable instructions are executed by the processor and the processor performs the step of determining the motion end position within the endpoint video window, the following step is performed: when the straight line drawn on the video interaction interface in the animation motion direction does not contact any window judgment area, determining that the motion end position of the image motion is the boundary position of the target video window.
  • in an embodiment, when the computer-readable instructions are executed by the processor, the processor further performs the following steps: when the image moves to the target end position, controlling the image to stay at the target end position for a preset duration; and, while the image stays, controlling the image to move according to the current motion information of the object in the endpoint video window.
  • in the embodiment of the present application, the terminal can implement cross-window animation on the video interaction interface based on the motion information of the object in the video window. For each party in a multi-person video interaction, each party's terminal only needs to perform the same operations, namely detecting the motion information of the object in the video window and rendering the corresponding cross-window animation based on the detection result, to keep the animation effects synchronized in a multi-person video chat or multi-person live video broadcast, without transmitting video frames carrying the animation effect to the terminals of the other video chat or live broadcast users; therefore, network resources can be saved.
  • a non-transitory computer-readable storage medium is provided, storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps: performing motion detection on the object in each video window on the video interaction interface; when motion information of the object in the target video window is detected, determining the animation information and the animation motion direction according to the motion information; determining the endpoint video window from the video windows on the video interaction interface according to the animation motion direction; and displaying the animation for the endpoint video window according to the animation information.
  • in an embodiment, when the computer-readable instructions are executed by the processor and the processor performs the step of animating the endpoint video window according to the animation information, the following step is performed: deforming the endpoint video window according to the animation information.
  • in an embodiment, when the computer-readable instructions are executed by the processor and the processor performs the step of animating the endpoint video window according to the animation information, the following steps are performed: displaying an image corresponding to the animation information in the endpoint video window, and controlling the image to move within the endpoint video window.
  • in an embodiment, when the computer-readable instructions are executed by the processor and the processor performs the step of animating the endpoint video window according to the animation information, the following steps are performed: displaying the corresponding image in the target video window according to the animation information; controlling the image to move toward the target end position of the endpoint video window in the animation motion direction; and determining the motion end position within the endpoint video window and updating the target end position to the motion end position.
  • in an embodiment, the motion information includes an action type and a motion direction; when the computer-readable instructions are executed by the processor and the processor performs the step of determining the animation information and the animation motion direction according to the motion information, the following steps are performed: determining the corresponding animation information according to the action type, and determining the corresponding animation motion direction according to the motion direction.
  • in an embodiment, when the computer-readable instructions are executed by the processor, the processor further performs the following steps: performing real-time motion tracking on the object within the target video window when the motion information of the object within the target video window is detected; and updating the animation motion direction based on the tracked motion information.
  • in an embodiment, the motion information includes an action type and a motion direction; when the computer-readable instructions are executed by the processor and the processor performs the step of controlling the image to move toward the target end position of the endpoint video window in the animation motion direction, the following steps are performed: rotating the image according to the animation motion direction; and, when the current motion information of the object in the target video window meets the preset image motion triggering condition, controlling the rotated image to move toward the target end position in the animation motion direction.
  • in an embodiment, when the computer-readable instructions are executed by the processor and the processor performs the step of determining the endpoint video window from the video windows on the video interaction interface according to the animation motion direction, the following steps are performed: determining the window judgment area of each candidate video window, where a candidate video window is a video window in the video interaction interface other than the target video window; drawing a corresponding straight line on the video interaction interface according to the animation motion direction; determining the target window judgment area that the straight line contacts first; and using the candidate video window corresponding to the target window judgment area as the endpoint video window.
  • in an embodiment, when the computer-readable instructions are executed by the processor and the processor performs the step of determining the motion end position within the endpoint video window, the following steps are performed: performing motion detection on the object within the endpoint video window; and, when it is detected that the motion information of the object in the endpoint video window satisfies the preset image receiving condition, determining the motion end position of the image motion within the target video window according to the motion information of the object in the endpoint video window.
  • in an embodiment, when the computer-readable instructions are executed by the processor and the processor performs the step of determining the motion end position within the endpoint video window, the following step is performed: when the straight line drawn on the video interaction interface in the animation motion direction does not contact any window judgment area, determining that the motion end position of the image motion is the boundary position of the target video window.
  • in an embodiment, when the computer-readable instructions are executed by the processor, the processor further performs the following steps: when the image moves to the target end position, controlling the image to stay at the target end position for a preset duration; and, while the image stays, controlling the image to move according to the current motion information of the object in the endpoint video window.
  • the above computer-readable storage medium can implement cross-window animation on the video interaction interface based on the motion information of the object in the video window. For each party in a multi-person video interaction, each party's terminal only needs to perform the same operations, namely detecting the motion information of the object in the video window and rendering the corresponding cross-window animation based on the detection result, to keep the animation effects synchronized in a multi-person video chat or multi-person live video broadcast, without transmitting video frames carrying the animation effect to the terminals of the other video chat or live broadcast users; therefore, network resources can be saved.
  • a person of ordinary skill in the art may understand that all or some of the steps of the methods in the foregoing embodiments may be completed by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium may include a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disc, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Computer Graphics (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An animation implementation method, including: performing motion detection on an object in each video window on a video interaction interface; when motion information of an object in a target video window is detected, determining animation information and an animation motion direction according to the motion information; determining an endpoint video window from the video windows on the video interaction interface according to the animation motion direction; and displaying an animation for the endpoint video window according to the animation information.

Description

动画实现方法、终端及存储介质
相关申请的交叉引用
本申请要求于2017年12月14日提交中国专利局,申请号为201711339423.6、发明名称为“一种动画实现方法和装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及视频技术领域,具体涉及一种动画实现方法、终端及存储介质。
背景技术
随着移动通信的数据无线传输速度的大幅提升,视频直播或聊天越来越被用户广泛使用,受到用户的欢迎。
为满足多个用户同时进行视频直播或聊天,目前视频直播产品均支持多人视频直播模式。在多人视频直播模式下终端可以将所有聊天用户的视频画面在视频直播界面中显示,这样用户便可以看到自己以及其他直播用户的视频画面。
为了增加视频直播产品的趣味性,一些视频直播产品还增加了视频互动功能,比如增加表情等动画效果功能,使得多个其他用户可以看到自己带有表情的视频画面。具体地,终端可以对用户自己的视频画面添加相应的表情,然后,将添加表情后的视频画面通过网络发送给视频接收方终端进行显示,从而实动画效果的同步。
然而,目前视频互动方案需要通过网络将带有表情的视频画面发送给其他视频聊天用户的终端,会存在浪费网络资源的问题。
发明内容
根据本申请的各种实施例提供一种动画实现方法、终端及存储介质。
一种动画实现方法,由终端执行,所述终端包括存储器和处理器,所述 方法包括:对视频交互界面上各视频窗口内的对象进行动作检测;
当检测到目标视频窗口内的对象的动作信息时,根据所述动作信息确定动画信息以及动画运动方向;
根据所述动画运动方向从所述视频交互界面上的各视频窗口中确定终点视频窗口;及
按照所述动画信息针对所述终点视频窗口进行动画展示。
一种终端,包括存储器和处理器,所述存储器中存储有计算机可读指令,所述计算机可读指令被所述处理器执行时,使得所述处理器执行以下步骤:
对视频交互界面上各视频窗口内的对象进行动作检测;
当检测到目标视频窗口内的对象的动作信息时,根据所述动作信息确定动画信息以及动画运动方向;
根据所述动画运动方向从所述视频交互界面上的各视频窗口中确定终点视频窗口;及
按照所述动画信息针对所述终点视频窗口进行动画展示。
一种非易失性的计算机可读存储介质,存储有计算机可读指令,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行以下步骤:
对视频交互界面上各视频窗口内的对象进行动作检测;
当检测到目标视频窗口内的对象的动作信息时,根据所述动作信息确定动画信息以及动画运动方向;
根据所述动画运动方向从所述视频交互界面上的各视频窗口中确定终点视频窗口;及
按照所述动画信息针对所述终点视频窗口进行动画展示。
本申请的一个或多个实施例的细节在下面的附图和描述中提出。本申请的其它特征、目的和优点将从说明书、附图以及权利要求书变得明显。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例提供的视频交互系统的场景示意图;
图2是本申请实施例提供的动画实现方法的流程示意图;
图3是本申请实施例提供的第一动画示意图;
图4是本申请实施例提供的第二种动画示意图;
图5是本申请实施例提供的第三种动画示意图;
图6是本申请实施例提供的第四种动画示意图;
图7是本申请实施例提供的窗口判断区域示意图;
图8是本申请实施例提供的终点窗口判断示意图;
图9是本申请实施例提供的另一种终点窗口判断示意图;
图10是本申请实施例提供的窗口抖动动画示意图;
图11是本申请实施例提供的捶打动画示意图;
图12是本申请实施例提供的第五种动画示意图;
图13是本申请实施例提供的第六种动画示意图;
图14是本申请实施例提供的第七种动画示意图;
图15是本申请实施例提供的第八种动画示意图;
图16是本申请实施例提供的第九种动画示意图;
图17是本申请实施例提供的动画实现方法的另一流程示意图;
图18是本申请实施例提供的动画实现方法的框架示意图;
图19是本申请实施例提供的第十种动画示意图;
图20是本申请实施例提供的第十一种动画示意图;
图21是本申请实施例提供的第十二种动画示意图;
图22是本申请实施例提供的第十三种动画示意图;
图23是本申请实施例提供的动画实现装置的第一种结构示意图;
图24是本申请实施例提供的动画实现装置的第二种结构示意图;
图25是本申请实施例提供的动画实现装置的第三种结构示意图;
图26是本申请实施例提供的终端的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请实施例提供一种视频交互系统,该系统包括本申请实施例任一提供的动画实现装置,该动画实现装置可以集成在终端中,该终端为手机、平板电脑等设备。此外,视频交互系统还可以包括其他设备,如服务器等设备。服务器用于对终端上传的视频进行转发。
参考图1,本申请实施例提供了一种视频交互系统,包括:终端10、服务器20、终端40,终端50;终端10与服务器20通过网络30连接、终端40与服务器20通过网络60连接、终端50与服务器20通过网络70连接。其中,网络30、网络60、网络70中包括路由器、网关等等网络实体,图中并为示意出。终端10、终端40、终端50可以通过有线网络或无线网络与服务器20进行消息交互,比如可以从服务器20下载应用(如视频直播应用)和/或应用更新数据包和/或与应用相关的数据信息或业务信息。其中,终端10可以为手机、平板电脑、笔记本电脑等设备,图1是以终端10为手机为例。该终端10中可以安装有各种用户所需的应用,比如,具备娱乐功能的应用(如视频直播应用,音频播放应用,阅读软件),又如具备服务功能的应用(如地图导航应用、团购应用等)。
基于上述图1所示的系统,以视频直播应用为例,终端10可以通过网络30从服务器20中按照需求下载视频直播应用和/或视频直播应用更新数据包和/或与视频直播应用相关的数据信息或业务信息(如视频数据等)。采用本申请实施例,在终端10与终端40、终端50进行视频交互时,终端10可以对视频交互界面上各视频窗口内的对象进行动作检测;当检测到目标视频窗口内的对象的动作信息时,根据动作信息确定动画信息以及动画运动方向; 根据动画运动方向从视频交互界面上的各视频窗口中确定终点视频窗口;按照动画信息针对终点视频窗口进行动画展示。
其中,终端40、终端50执行与终端10同样的操作。终端10、终端40、终端50分别在视频交互界面上显示相同的动画效果,即针对终点视频窗口进行动画展示。
上述图1的例子只是实现本申请实施例的一个系统架构实例,本申请实施例并不限于上述图1所示的系统结构,基于该系统架构,提出本申请各个实施例。
在一实施例中,提供了一种动画实现方法,可以由终端的处理器执行,如图2所示,该动画实现方法的具体流程如下:
步骤201、对视频交互界面上各视频窗口内的对象进行动作检测。
其中,视频交互界面包括多个视频窗口,每个视频窗口内显示有相应的视频画面。
比如,视频交互界面可以为多人视频交互界面,也即多个用户进行视频交互时的视频交互界面。其中,多人视频交互界面上包括多个视频窗口,每个视频窗口内显示有相应用户的视频画面。
其中,视频窗口内的对象为视频窗口显示的视频画面中的对象。该对象为视频画面的对象主体,该对象主体可以根据实际需求设定,比如,该对象主体可以为猫、狗等宠物,人等等。
比如,视频窗口内对象可以为视频窗口显示的视频画面中的用户。
其中,对象的动作可以包括视频画面中对象的位置变化,比如,当对象为用户时,对象的动作可以是视频画面中用户的位置变化,包括用户的头部,手部,脸部、身体,脚部等部位的位置变化。手部的位置变化或动作可称为手势。用户的整个身体的位置变化或者动作可称为姿势,脸部的位置变化、或动作即为用户表情。
实际应用中,终端可以通过网络从服务器获取多人直播成员的视频数据,然后,在视频交互界面中相应的视频窗口中对相应成员的视频数据进行显示。
例如,参考图3,在四人视频交互如视频直播时,终端可以在视频交互界面中显示有四个视频窗口,即视频窗口a、b、c、d,每个视频窗口显示相应 用户的视频画面。此时,终端可以分别对视频窗口a、b、c、d的视频画面中的用户进行动作检测。
步骤202、当检测到目标视频窗口内的对象的动作信息时,根据动作信息确定动画信息以及动画运动方向。
其中,目标视频窗口为视频交互界面上的任一视频窗口,或者对象做动作的视频窗口。比如,当某个视频窗口内的对象开始做动作,那么该视频窗口即为目标视频窗口。
其中,对象的动作信息可以对象的位置变化信息,比如,当对象为用户时,动作信息可以包括用户各部位的动作信息,比如,手部动作信息、头部动作信息、腿部动作信息等等。
在一实施例中,动作信息可以包括动作类型和/或动作方向;此时,可以根据动作类型确定相应的动画信息,根据动作方向确定动画运动方向。也即,步骤“根据动作信息确定动画信息以及动画运动方向”的步骤可以包括:根据动作类型确定相应的动画信息,根据动作方向确定相应的动画运动方向。
基于动作类型确定动画信息的方式可以有多种,比如,在一实施例中,可以预设设置动作类型与动画信息之间的映射关系,在检测到动作信息后,便可以基于该映射关系获取相应的动画信息。
基于动作方向确定动运动方向的方式有多种,可以根据实际需求设定。比如,在一实施例中,可以将动作方向直接作为动画运动方向,也可以将动作方向的反方向作为动画运动方向,或者还可以将与动作方向垂直的垂直方向作为动画运动方向等等。
其中,动作类型可以根据实际需求划分,比如,以手部动作为例,可以将手部动作划分为:画爱心、扔东西、握拳、射箭、敲打、拉拽、剪刀手势、Ok手势等等。
又比如,以头部动作为例,可以将头部动作划分为:摇头、点头、倾斜头部等等。
又比如,以身体动作为例,可以将身体动作划分为:左右摇摆、上下摇摆、倾斜身体等等。
其中,动作方向为对象的动作的运动方向或者运动趋势方向等。比如, 用户向右扔东西时,该动作方向可以为右边方向等。
其中,动画信息可以包括:动画触发位置、动画类型、动画持续时长等等与动画相关信息。
其中,动画类型可以根据实际需求划分,比如,可以将动画类型划分成:视频窗口变形(如视频窗口抖动、形变等)、视频窗口内显示动画(如视频窗口内显示)、视频窗口内显示运动的动画、视频窗口内显示动画并向其他视频窗口运动等等。
又比如,动画类型还可以划分成:心形动画、炸弹动画、弓箭动画等等。
在一实施例中,在检测到动作信息后,可以判断动作信息是否满足预设动画触发条件,若满足,则根据动作信息确定动画信息以及动画运动方向。
其中,预设动画触发条件为触发动画的动作条件,可以根据实际需求设定。比如,动作信息包括动作类型时,预设动画触发条件可以包括动作类型为预设动作类型。例如,当对象为用户时,预设动画触发条件可以包括:用户的手部动作为预设手部动作类型、即用户的手势为预设手势即特定手势;如用户的手势为画预定图案(如心形、圆形等)的手势、戳东西的手势、用户的手势为握拳手势等。
又比如,预设动画触发条件还可以包括:用户的头部动作类型为预设头部动作类型,即头部动作为特定头部动作,如用户摇头等;用户的身体动作为特定身体动作,即用户的姿势为特定姿势,比如,用户扭腰等。
又比如,预设动画触发条件还可以包括:用户的脸部表情类型为预设表情类型,即脸部表情为特定表情,比如,用户愤怒张嘴、开心大笑等等。
在一实施例中,检测到的对象的动作信息可以包括:对象的动作类型、对象的动作在视频窗口内的位置信息以及对象的动作方向信息;此时,步骤“根据动作信息确定动画信息以及动画运动方向”,可以包括:根据位置信息在目标视频窗口内确定相应的动画触发位置;根据动作方向信息确定动画运动方向;根据动作类型确定需要触发的动画类型。
在一实施例中,可以预设设置动作类型与动画类型之间的映射关系,在检测到动作信息后,可以基于当前动作类型和该映射关系确定需要触发的动画类型。
例如,参考图4,当检测到视频窗口a中用户作出甩东西手势时,可以根据手势位置确定动画触发位置,根据手势运动方向确定动画运动方向a’,根据动作类型“甩东西”确定动画类型“扔炸弹”,此时,可以根据动画触发位置在视频窗口a中显示炸弹图像,并控制炸弹图像按照动画运动方向a’运动。
步骤203、根据动画运动方向从视频交互界面上的各视频窗口中确定终点视频窗口。
其中,终点视频窗口为需要最终实现动画的视频窗口。比如,当动画为图像运动时,终点视频窗口可以为图像最终达到的视频窗口;当动画为视频窗口变形时,该终端视频窗口为需要进行变形的视频窗口;当动画为图像运动时,终点视频窗口可以为需要显示图像,且图像运动的视频窗口。
在一实施例中,基于动画运动方向确定终点视频窗口的方式可有多种,比如,为提升终点视频窗口的确定速度,可以预先在视频交互界面中设置视频窗口的窗口判断区域,然后,基于动画运动方向与各视频窗口的窗口判断区域来确定终点视频窗口。
在一实施例中,步骤“根据运动方向确定图像运动的终点视频窗口”,可以包括:确定候选视频窗口的窗口判断区域,候选视频窗口为视频交互界面中除目标视频窗口以外的视频窗口;按照动画运动方向在视频交互界面上绘制相应的直线;确定直线优先接触的目标窗口判断区域;及将目标窗口判断区域对应的候选视频窗口作为终点视频窗口。
其中,窗口判断区域为视频交互界面中用于判断终点视频窗口的区域。视频交互界面中包括每个视频窗口对应的窗口判断区域。在一实施例中,所有窗口判断区域的面积之和等于所有视频窗口面积之和。
其中,视频窗口的窗口判断区域可以根据实际需求划分,视频窗口的窗口判断区域包括该视频窗口的至少一部分区域。
下面以动画为图像运动为例来介绍终点视频窗口的确定方式:
比如,参考图7,视频窗口a为目标视频窗口,此时,视频窗口b的窗口判断区域为区域701,视频窗口c的窗口判断区域为区域702、视频窗口d的窗口判断区域为区域703。区域701包含窗口b的一部分,区域703包含整个视频窗口d,以及视频窗口b、c的一部分。
其中,优先到达的视频窗口为图像最先到达的视频窗口。确定图像按照动画运动方向运动时优先到达的目标窗口判断区域的实现方式可以包括:按照动画运动方向在视频交互界面上画一条直线,确定该直线最先接触的窗口判断区域为图像优先到达的目标窗口判断区域。
参考图8,当图像700按照运动方向a”运动时,可以判断图像700最先达到视频窗口b的窗口判断区域701,具体地,可以按照运动方向a”在视频交互界面上画一条直线,从图中可知该直线最先接触视频窗口b的窗口判断区域701,此时视频窗口b即为图像达到的终点视频窗口。
在一实施例中,当图像按照动画运动方向运动时不与任何窗口判断区域接触的情况下,确定图像运动的运动终点位置为目标视频窗口的边界位置。比如,运动终点位置可以为图像按照动画运动方向运动时所到达的目标视频窗口边界位置。
比如,参考图9,当图像700按照运动方向b’运动时不与视频窗口b、c、d接触,因此确定图像运动的运动终点位置为视频窗口的边界位置。
步骤204、按照动画信息针对终点视频窗口进行动画展示。
其中,动画展示的方式可以包括多种,如下:
(1)、终点视频窗口变形。
步骤“按照动画信息针对终点视频窗口进行动画展示”,包括:根据动画信息对终点视频窗口进行变形。
其中,视频窗口的变形可以包括:视频窗口的形状变化、位置变化、背景变化等类型。该位置变化可以包括:视频窗口抖动、视频窗口旋转、视频窗口跳动等等。
在一实施例中,动画信息可以包括视频窗口变形类型,此时,可以根据窗口变形类型对应终点视频窗口进行变形。
比如,参考图10当视频窗口c中用户对着视频窗口d作出“戳”的动作时,此时,可以检测到“戳”的动作信息(如动作类型、动作方向),基于动作信息确定动画运动方向c’以及窗口变形类型(若窗口抖动),根据动画运动方向c’确定终点视频窗口为视频窗口d,那么此时,将会控制视频窗口d抖动。
又比如,当视频窗口c中用户对着视频窗口d多次作出“吹气”的动作 时,此时,可以检测到“吹气”的动作信息(如动作类型、动作方向),基于动作信息确定动画运动方向以及窗口变形类型(如窗口跳动),根据动画运动方向确定终点视频窗口为视频窗口d,那么此时,将会控制视频窗口d跳动。
还比如,当视频窗口a中用户对着视频窗口b作出“旋转”的动作时,此时,可以检测到“旋转”的动作信息(如动作类型、动作方向),基于动作信息确定动画运动方向以及窗口变形类型(如窗口旋转),根据动画运动方向确定终点视频窗口为视频窗口b,那么此时,将会控制视频窗口b旋转。
(2)、终点视频窗口内实现动画。
也即,步骤“按照动画信息针对终点视频窗口进行动画展示”包括:在终点视频窗口内显示与动画信息对应的图像,并控制图像在终点视频窗口内运动。
其中,动画信息可以包括动画的主体图像的图像类型,此时,可以在终点视频窗口内显示与图像类型对应的图像,并控制该图像在终点视频窗口内运动。其中,图像类型可以根据实际需求划分,比如可以包括:锤子图像、铁锹图像、刀剑图像、火焰图像等等。
比如,参考图11,当视频窗口c中用户对着视频窗口d作出“捶打”的动作时,此时,可以检测到“捶打”的动作信息(如动作类型、动作方向),基于动作信息确定动画运动方向d’以及动画主体图像的类型(如“锤子”),根据动画运动方向d’确定终点视频窗口为视频窗口d,那么此时,将视频窗口d显示一个“锤子”图像,并控制“锤子”图像在视频窗口d中运动,如不停敲打。
在一实施例中,还可以基于终点视频窗口内对象的位置信息确定动画触发位置,然后,根据动画触发位置显示与动画信息对应的图像,并控制图像在终点视频窗口内运动。
在一实施例中,还可以获取目标视频窗口内对象的动作频率,基于动作频率控制图像在终点视频窗口内运动,如可以获取频窗口c中“捶打”的动作频率,在动画展示时,可以基于该动作频率控制图像在终点视频窗口内中锤子”图像的运动频率。
(3)、跨视频窗口动画。
此时,步骤“按照动画信息针对终点视频窗口进行动画展示”包括:根据动画信息在目标视频窗口内显示相应的图像;控制图像按照动画运动方向向终点视频窗口的目标终点位置运动;在终点视频窗口内确定运动终点位置,并将目标终点位置更新为运动终点位置。
其中,显示的图像可以为静态图像,也可以为动态图像,比如可以包括动态表情、动效贴图;如动态的心形、动态的弓箭、动态的拳头、动态的飞吻等等。该图像的类型可以根据实际需求划分,比如划分成表达爱意图像、射击图像、搞笑图像等等。
其中,动画信息可以包括动画主体图像的类,该图像的类型可以与对象的动作类型对应,具体地,可以根据动作类型确定待显示图像的图像类型;根据图像类型在目标视频窗口内相应的位置显示相应的图像。
此外,动画信息还可以包括动画触发位置,此时,可以根据动画触发位置在目标视频窗口内显示相应的图像。
其中,运动终点位置为图像运动的实际终点位置,当图像运动到该运动终点位置时停止运动。初始阶段时,目标终点位置可以根据实际需求设定,比如,终点视频窗口内的任一位置。
运动终点位置确定方式可以有多种,比如,可以选取终点视频窗口的中心位置作为运动终点位置,或者,选取终端视频窗口内的其他位置作为运动终点位置。
例如,参考图4和图5,当检测到视频窗口a中用户作出甩东西手势时,可以根据手势位置确定动画触发位置,根据手势运动方向确定动画运动方向a’,根据动作类型“甩东西”确定动画类型“扔炸弹”,此时,可以根据动画运动方向a’确定终点视频窗口为视频窗口b,然后,可以根据动画触发位置在视频窗口a中显示炸弹图像,并控制炸弹图像按照动画运动方向a’向终点视频窗口b的目标终点位置运动。
在炸弹图像运动的过程中,可以在终点视频窗口内确定运动终点位置如视频窗口b的中心位置,此时,更新目标终点位置为运动终点位置。如图5所示,可以控制炸弹图像向视频窗口b的中心位置运动,运动到该位置时停止运动。
在一实施例中,当图像包括多个子图像时,可以控制目标子图像在视频交互界面上按照图像运动方向向目标终点位置运动。比如,当图像为弓箭图像时可以控制箭图像按照图像运动方向向目标终点位置运动。
在一实施例中,还可以在显示图像后,可以根据当前动画运动方向对图像的进行调整,使得图像与对象动作更贴切;并且还可以设置图像运动触发条件,使得图像效果更准确。步骤“控制图像按照动画运动方向向终点视频窗口的目标终点位置运动”,包括:根据动画运动方向对图像进行旋转;及在目标视频窗口内对象的当前动作信息满足预设图像运动触发条件时,控制旋转后的图像按照动画运动方向向终点视频窗口的目标终点位置运动。
本申请实施例可以在动作信息满足预设图像触发条件后,持续对目标视频窗口内对象进行动作检测,当后续检测到的动作信息满足一定条件时,则触发图像运动。
其中,预设图像运动触发条件可以触发图像运动的条件,可以根据实际需求设定。比如,预设图像触发条件包括用户作出第一特定手势,该预设图像运动触发条件可以包括用户作出第二特定手势。该预设图像运动触发条件可以与预设图像触发条件相关联;比如,第二特定手势与第一特定手势为连续的手势或者相关联的手势。
在一实施例中,为了增加图像效果的互动性,可以基于终点视频窗口内对象的动作信息来确定运动终点位置。也即,步骤“在终点视频窗口内确定相应的运动终点位置”,可以包括:对终点视频窗口内对象的动作进行检测;及当检测到终点视频窗口内对象的动作信息满足预设图像接收条件时,根据终点视频窗口内对象的动作信息,在目标视频窗口内确定图像运动的运动终点位置。
其中,终端视频窗口内的对象为终点视频窗口显示的视频画面中的对象。该对象为视频画面的对象主体,该对象主体可以根据实际需求设定,比如,该对象主体可以为猫、狗等宠物,人等等。
其中,终点视频窗口内对象的动作信息可以包括对象的动作在终点视频窗口内的位置信息,本申请实施例可以基于该位置信息在终点视频窗口内确定图像的运动终点位置。
其中,预设图像接收条件为触发图像接收的动作条件,该接收条件可以根据实际需求设定。比如,当对象为用户时,预设图像接收条件可以包括:用户的手部动作为预设手部动作、即用户的手势为预设手势即特定手势。
在一实施例中,预设图像接收条件可以与预设图像触发条件或者显示的图像类型对应或者相关。比如,当图像为心形图像时,预设图像接收条件可以包括用户的动作为双手捧东西(如心形图像、圆形图像等)的动作;又比如,当图像为弓箭图像时,预设图像接收条件可以包括:用户的动作为中箭动作(如头部倾斜等);当图像为炸弹图像时,预设图像接收条件可以包括:用户的动作为中弹动作(如双手捂眼睛等)。
在一实施例中,还可以对对象进行实时动作追踪,通过追踪到的动作信息确定动画运动方向,并对初始确定的动画运动方向进行更新。比如,在控制图像按照动画运动方向向终点视频窗口的目标终点位置运动之前,本实施例方法可以包括:当检测到目标视频窗口内的对象的动作信息时,对目标视频窗口内对象进行实时运动追踪;及根据追踪到的运动信息更新动画运动方向。
比如,当检测到的动作信息满足预设动画触发条件(如动作类型为预设动作类型等)时,可以根据动作信息确定动画信息以及动画运动方向,并对目标视频窗口内对象进行实时运动追踪;然后,基于追踪到的运动信息更新该动画运动方向。
其中,追踪到的运动信息可以包括对象动作的运动的方向信息、对象动作的运动趋势信息等等。比如,用户手势的运动趋势等。
比如,当检测视频窗口a中用户的手部作出第一动作时,可以对动作进行实时运动追踪,根据第一动作信息确定动画信息(如动画主体图像类型、动画触发位置等),当视频窗口a中用户继续作出第二动作时,此时,可以基于追踪到第二动作的运动信息再次确定动画运动方向,并更新动画运动方向。
譬如,参考图12、图13以及图14,当检测到视频窗口a中用户作出画心形手势时,可以对该用户进行实时运动追踪,根据画心形手势在视频窗口a确定动画信息(动画主体图像类型、动画触发位置等)以及动画运动方向,在视频窗口a相应的位置显示图像如心形图像;参考图14,当用户继续作出 将心形图像向右击打的连续动作时,可以基于追踪到动作运动趋势信息确定动画运动方向a”,即向右运动,此时,更新动画运动方向为a”;根据运动方向a”确定终点视频窗口为视频窗口b,然后,控制心形图像按照动画运动方向a”向视频窗口b的目标终点位置运动。
参考图15,在更新动画运动方向a”后,根据动画运行方向a”对心形图像进行一定角度的旋转,当检测到用户向右击打图像时,便可以控制旋转后的心形图像按照运动方向a”运动。
参考图16,当确定终点视频窗口为视频窗口b时,可检测视频窗口b中用户的动作,当用户的动作为双手捧东西的动作时,可以基于用户的动作信息在视频窗口b中确定相应的运动终点位置F’。然后,将目标终点位置F更新为运动终点位置F’。这样心形图像将会按照运动方向a”运动到运动终点位置F’,此时,图像运动结束。
在一实施例中,当图像到运动终点位置时,可以根据运动终点位置在终点视频窗口内显示与图像相匹配的匹配图像。比如,参考图6,当图像为炸弹时,当炸弹到达视频窗口b内的运动终点位置时,可以根据运动终点位置在视频窗口b中显示与炸弹图像匹配的爆炸图像。
在一实施例中,当图像运动到目标终点位置时,控制图像在目标终点位置停留预设时长;在图像停留期间,根据终点视频窗口内对象的当前动作信息控制图像进行相应的移动。
比如,当图像到达目标终点位置时,图像在该位置停留20s,在停留期间,可以根据终点视频窗口内用户动作对图像进行相应的移动,也即图像跟随用户动作移动。
在一实施例中,当运动终点位置为视频窗口边界位置时,可以在图像到达该位置时控制图像消失。也即图像飞出视频窗口后消失。由上可知,本申请实施例采用对视频交互界面上各视频窗口内的对象进行动作检测;当检测到目标视频窗口内的对象的动作信息时,根据动作信息确定动画信息以及动画运动方向;根据动画运动方向从视频交互界面上的各视频窗口中确定终点视频窗口;按照动画信息针对终点视频窗口进行动画展示。该方案可以基于视频窗口内对象的动作信息在视频交互界面上实现跨窗口动画;对于多人视 频交互的每一交互方来说,各视频交互方终端只需进行同样的操作,也即视频窗口内对象的动作信息检测以及基于检测结果实现相应的跨窗口,便可以在多人视频聊天或多人视频直播中实现动画效果同步,无需将具有动画效果的视频画面传输给其他视频聊天或者视频直播用户终端,因此,可以节省网络资源以及提升动画效果的同步性。
此外,本申请实施例可以根据视频内用户的动作在视频交互界面上实现跨窗口动画,呈现不同的跨窗口动画反馈,让不同视频窗口的用户产生更加类似真实世界的互动体验,弱化真实空间的距离感,大大提升了视频交互的互动性。
在一实施例中,根据上述实施例所描述的方法,以下将作进一步详细说明。
本申请实施例提供了一种动画实现系统,包括终端和服务器,参考图1,终端与服务器通过网络连接。
下面将基于上述所示的动画实现系统,以跨窗口动画为例,来对本申请的动画实现方法进一步描述。
如图17和图18所示,一种动画实现方法,具体流程可以如下:
步骤1701、终端在视频交互界面上各视频窗口内显示相应用户的视频画面。
其中,视频交互界面包括多个视频窗口,每个视频窗口内显示有相应用户的视频画面。
比如,当有4个用户进行视频交互时,可以在视频交互界面上4个视频窗口分别显示相应用户的视频画面。比如,视频窗口a显示用户A的视频画面,视频窗口b显示用户B的视频画面,视频窗口c显示用户C的视频画面,视频窗口d显示用户D的视频画面。
步骤1702、终端对各视频窗口的视频画面中用户的动作进行检测。
比如,终端可以分别对视频窗口a、b、c、d的视频画面中的用户进行动作检测。
步骤1703、当检测到目标视频窗口内用户的动作信息满足预设动画触发条件时,根据动作信息确定动画信息,并对目标视频窗口内用户进行实时运 动追踪。
其中,目标视频窗口为视频交互界面上的任一视频窗口,或者对象做动作的视频窗口。比如,当某个视频窗口内的对象开始做动作,那么该视频窗口即为目标视频窗口。
动作信息可以包括动作类型、动作方向、动作位置信息;此时,可以根据动作类型确定相应的动画信息,根据动作方向确定动画运动方向。
比如,终端根据动作方向确定动画运动方向,根据动作类型确定动画主体图像的图像类型,根据动作位置信息在目标视频窗口内确定相应的图像触发位置。
其中,动作类型可以根据实际需求划分,比如,以手部动作为例,可以将手部动作划分为:画爱心、扔东西、握拳、射箭、敲打、拉拽、剪刀手势、Ok手势等等。比如,当检测视频窗口d中用户D的动作满足预设图像触发条件时,可以对视频窗口d中用户D进行实时运动追踪。
其中,预设动画触发条件为触发显动画的动作条件,可以根据实际需求设定。
比如,动作信息包括动作类型时,预设动画触发条件可以包括动作类型为预设动作类型。例如,当对象为用户时,预设动画触发条件可以包括:用户的手部动作为预设手部动作类型、即用户的手势为预设手势即特定手势;如用户的手势为画预定图案(如心形、圆形等)的手势、戳东西的手势、用户的手势为握拳手势等。
其中,预设动画触发条件为触发动画的动作条件,可以根据实际需求设定。比如,动作信息包括动作类型时,预设动画触发条件可以包括动作类型为预设动作类型。例如,当对象为用户时,预设动画触发条件可以包括:用户的手部动作为预设手部动作类型、即用户的手势为预设手势即特定手势;如用户的手势为画预定图案(如心形、圆形等)的手势、戳东西的手势、用户的手势为握拳手势等。
其中,图像可以静态图像、或动态图像,动态图像可以包括动态表情、动效贴图。如动态的心形、动态的弓箭、动态的拳头、动态的飞吻等等。
其中,图像的类型可以根据实际需求划分,如根据功能划分可划分成: 表达爱意类型(如心形图像、飞吻图像等)、射击类型(如弓箭图像、枪械图像等)、搞笑类型(如表情等)等等,
比如,当检测到视频窗口d中用户D作出动作为握拳动作时,基于动作信息确定图像触发位置、图像类型以及初始动画运动方向,如图像类型为射击类图像,并同时对视频窗口d中用户D进行实时运动追踪。
步骤1704、终端根据动画信息在目标视频窗口内显示相应的图像。
比如,终端根据确定的图像类型在图像触发位置显示相应的图像。
例如,参考图19,当检测到视频窗口d中用户D作出动作为握拳动作时,基于动作信息确定图像触发位置以及图像类型,如图像类型为射击类图像。可以在视频窗口d中握拳的位置显示一弓箭图像。
步骤1705、终端根据追踪到用户的运动信息更新动画运动方向。
其中,追踪到的运动信息可以包括对象运动的方向信息、对象的运动趋势信息等等。比如,用户手势的运动趋势等。
在基于动作信息确定初始动画运动方向之后,还可以对用户的动作进行实时追踪,并基于追踪到的信息更新动画运动方向。
步骤1706、终端根据更新后的动画运动方向从视频交互界面上的各视频窗口中确定终点视频窗口。
其中,终点视频窗口为图像最终到达的视频窗口。比如,参考图20,可以根据运动方向确定终点视频窗口为视频窗口a。当图像运动到视频窗口a的相应位置图像便停止运动。
本实施例中,基于动画运动方向确定终点视频窗口的方式可有多种,比如,为提升终点视频窗口的确定速度,可以预先在视频交互界面中设置视频窗口的窗口判断区域,然后,基于动画运动方向与各视频窗口的窗口判断区域来确定终点视频窗口。具体地,终点视频窗口确定过程可以参考上述实施例的描述。
步骤1707、终端控制图像按照动画运动方向向终点视频窗口的目标终点位置运动。
当图像由多个子图像组成时,可以选取某一个子图像在视频交互界面上按照图像运动方向向目标终点位置运动。
其中,目标终点位置可以为视频交互界面中位于图像运动方向上的任一位置;该目标终点位置可以根据实际需求设定。比如,在一实施例中,该目标终点位置值可以为空。
在一实施例中,还可以设置图像运动触发条件,使得图像效果更准确;比如,在目标视频窗口内对象的当前动作信息满足预设图像运动触发条件时,控制图像按照图像运动方向在视频交互界面上向目标终点位置运动。
其中,预设图像运动触发条件可以触发图像运动的条件,可以根据实际需求设定。比如,预设图像触发条件包括用户作出第一特定手势,该预设图像运动触发条件可以包括用户作出第二特定手势。该预设图像运动触发条件可以与预设图像触发条件相关联;比如,第二特定手势与第一特定手势为连续的手势或者相关联的手势。
参考图20,当用户D左手从握拳状态放开时,便控制箭图像沿着运动方向向目标终点位置运动。
步骤1708、终端在终点视频窗口内确定运动终点位置,并将目标终点位置更新为运动终点位置。
在一实施例中,可以对终点视频窗口内对象的动作进行检测;当检测到终点视频窗口内对象的动作信息满足预设图像接收条件时,根据终点视频窗口内对象的动作信息,在目标视频窗口内确定图像运动的运动终点位置,并将目标终点位置更新为运动终点位置。其中,终点视频窗口内对象的动作信息可以包括对象的动作在终点视频窗口内的位置信息,本申请实施例可以基于该位置信息在终点视频窗口内确定图像的运动终点位置。
其中,预设图像接收条件可以与显示的图像类型对应或者相关。比如,当图像为心形图像时,预设图像接收条件可以包括用户的动作为双手捧东西(如心形图像、圆形图像等)的动作;又比如,当图像为弓箭图像时,预设图像接收条件可以包括:用户的动作为中箭动作(如头部倾斜等);当图像为炸弹图像时,预设图像接收条件可以包括:用户的动作为中弹动作(如双手捂眼睛等)。
例如,参考图21,当确定终点视频窗口为视频窗口a时,可以对视频窗口a中用户A进行动作检测,当用户A的动作满足预设图像接收条件时,可以基 于用户A的动作确定运动终点位置。比如,当用户A作出中箭动作(如头部向左偏移)时,可以基于中箭动作在窗口a中的位置信息确定运动终点位置,然后,将目标终点位置更新为该运动终点位置。此时,箭头将会射向用户A的脸部。
在一实施例中,当图像运动到目标终点位置时,控制图像在目标终点位置停留预设时长;在图像停留期间,根据终点视频窗口内对象的当前动作信息控制图像进行相应的移动。
比如,参考图22,当箭头射中用户A的脸部后,将会停留预设时长如20s等等。并且在停留期间内箭头会跟随用户A脸部运动而移动。
在一实施例中,当图像到运动终点位置时,可以根据运动终点位置在终点视频窗口内显示与图像相匹配的匹配图像。比如,参考图6c,当箭头射中用户A的脸部后,用户A的脸部相应位置会显示流血图像。
由上可知,本申请实施例采用对视频交互界面上各视频窗口内的对象进行动作检测;当检测到目标视频窗口内的对象的动作信息时,根据动作信息确定动画信息以及动画运动方向;根据动画运动方向从视频交互界面上的各视频窗口中确定终点视频窗口;按照动画信息针对终点视频窗口进行动画展示。该方案可以基于视频窗口内对象的动作信息在视频交互界面上实现跨窗口动画;对于多人视频交互的每一交互方来说,各视频交互方终端只需进行同样的操作,也即视频窗口内对象的动作信息检测以及基于检测结果实现相应的跨窗口动画,便可以在多人视频聊天或多人视频直播中实现动画效果同步,无需将具有动画效果的视频画面传输给其他视频聊天或者视频直播用户终端,因此,可以节省网络资源以及提升动画效果的同步性。
此外,本申请实施例可以根据视频内用户的动作在视频交互界面上实现跨窗口动画,呈现不同的跨窗口动画反馈,让不同视频窗口的用户产生更加类似真实世界的互动体验,弱化真实空间的距离感,大大提升了视频交互的互动性。
为了更好地实施以上方法,本申请实施例还提供动画实现装置,如图23所示,该动画实现装置可以包括:检测单元2301、信息确定2302、窗口确定单元2303以及动画展示单元2304,如下:
检测单元2301,用于对视频交互界面上各视频窗口内的对象进行动作检测。
信息确定单元2302,用于当检测单元2301检测到目标视频窗口内的对象的动作信息时,根据动作信息确定动画信息以及动画运动方向。
窗口确定单元2303,用于根据动画运动方向从视频交互界面上的各视频窗口中确定终点视频窗口。
动画展示单元2304,用于按照动画信息针对终点视频窗口进行动画展示。
在一实施例中,动画展示单元2304,用于:根据动画信息对终点视频窗口进行变形。
在一实施例中,动画展示单元2304,用于:在终点视频窗口内显示与动画信息对应的图像,并控制图像在终点视频窗口内运动
在一实施例中,参考图24,动画展示单元2304,包括:
图像显示子单元23041,用于根据动画信息在目标视频窗口内显示相应的图像。
控制子单元23042,用于控制图像按照动画运动方向向终点视频窗口的目标终点位置运动。
位置确定子单元23043,用于在终点视频窗口内确定运动终点位置,并将目标终点位置更新为运动终点位置。
在一实施例中,动作信息包括动作类型和动作方向;信息确定单元2302,用于根据动作类型确定相应的动画信息,根据动作方向确定相应的动画运动方向。
在一实施例中,参考图25,动画实现装置还可以包括:运动追踪单元2305;运动追踪单元2305,用于:当检测单元2301检测到目标视频窗口内的对象的动作信息时,对目标视频窗口内对象进行实时运动追踪;根据追踪到的运动信息更新动画运动方向。
在一实施例中,窗口确定单元2303,用于:确定候选视频窗口的窗口判断区域,候选视频窗口为视频交互界面中除目标视频窗口以外的视频窗口;按照动画运动方向在视频交互界面上绘制相应的直线;确定直线优先接触的目标窗口判断区域;将目标窗口判断区域对应的候选视频窗口作为终点视频窗口。
在一实施例中,位置确定子单元23043,用于:对终点视频窗口内的对象进行动作检测;当检测到终点视频窗口内对象的动作信息满足预设图像接收条件时,根据终点视频窗口内对象的动作信息,在目标视频窗口内确定图像运动的运动终点位置。
在一实施例中,控制子单元23042,用于:根据动画运动方向对图像进行旋转;在目标视频窗口内对象的当前动作信息满足预设图像运动触发条件时,控制旋转后的图像按照动画运动方向向目标终点位置进行运动。
在一实施例中,窗口确定单元2303,还可以用于:当直线不与任何窗口判断区域接触的情况下,确定图像运动的运动终点位置为目标视频窗口的边界位置。
其中,优先到达的视频窗口为图像最先到达的视频窗口。确定图像按照动画运动方向运动时优先到达的目标窗口判断区域的实现方式可以包括:按照动画运动方向在视频交互界面上画一条直线,确定该直线最先接触的窗口判断区域为图像优先到达的目标窗口判断区域。
在一实施例中,控制子单元23042还用于:当图像运动到目标终点位置时,控制图像在目标终点位置停留预设时长;在图像停留期间,根据终点视频窗口内对象的当前动作信息控制图像进行相应的移动。
具体实施时,以上各个单元可以作为独立的实体来实现,也可以进行任意组合,作为同一或若干个实体来实现,以上各个单元的具体实施可参见前面的方法实施例,在此不再赘述。
该动画实现装置具体可以集成终端,比如以客户端的形式集成在终端中,该终端可以为手机、平板电脑等设备。
由上可知,本申请实施例动画实现装置采用检测单元2301对视频交互界面上各视频窗口内的对象进行动作检测;当检测到目标视频窗口内对象的动作信息满足预设图像触发条件时,由信息确定单元2302根据动作信息确定动画信息以及动画运动方向;由窗口确定单元2303根据动画运动方向从视频交互界面上的各视频窗口中确定终点视频窗口;由动画展示单元234按照动画信息针对终点视频窗口进行动画展示。该方案可以基于视频窗口内对象的动作信息在视频交互界面上实现跨窗口动画;对于多人视频交互的每一交互方来 说,各视频交互方终端只需进行同样的操作,也即视频窗口内对象的动作信息检测以及基于检测结果实现相应的跨窗口动画,便可以在多人视频聊天或多人视频直播中实现动画效果同步,无需将具有动画效果的视频画面传输给其他视频聊天或者视频直播用户终端,因此,可以节省网络资源以及提升动画效果的同步性。
为了更好地实施以上方法,本申请实施例还提供了一种终端,该终端可以为手机、平板电脑等设备。
参考图26,本申请实施例提供了一种终端2600,可以包括一个或者一个以上处理核心的处理器2601、一个或一个以上计算机可读存储介质的存储器2602、射频(Radio Frequency,RF)电路2603、电源2604、输入单元2605、以及显示单元2606等部件。本领域技术人员可以理解,图26中示出的终端结构并不构成对终端的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。其中:
处理器2601是该终端的控制中心,利用各种接口和线路连接整个终端的各个部分,通过运行或执行存储在存储器2602内的软件程序和/或模块,以及调用存储在存储器2602内的数据,执行终端的各种功能和处理数据,从而对终端进行整体监控。可选的,处理器2601可包括一个或多个处理核心;优选的,处理器2601可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器2601中。
存储器2602可用于存储软件程序以及模块,处理器2601通过运行存储在存储器2602的软件程序以及模块,从而执行各种功能应用以及数据处理。
RF电路2603可用于收发信息过程中,信号的接收和发送,特别地,将基站的下行信息接收后,交由一个或者一个以上处理器2601处理;另外,将涉及上行的数据发送给基站。
终端还包括给各个部件供电的电源2604(比如电池),优选的,电源可以通过电源管理系统与处理器2601逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。电源2604还可以包括一个或一个以上的直流或交流电源、再充电系统、电源故障检测电路、电源转换器或者逆变器、 电源状态指示器等任意组件。
该终端还可包括输入单元2605,该输入单元2605可用于接收输入的数字或字符信息,以及产生与用户设置以及功能控制有关的键盘、鼠标、操作杆、光学或者轨迹球信号输入。
该终端还可包括显示单元2606,该显示单元2606可用于显示由用户输入的信息或提供给用户的信息以及终端的各种图形用户接口,这些图形用户接口可以由图形、文本、图标、视频和其任意组合来构成。显示单元2608可包括显示面板,可选的,可以采用液晶显示器(LCD,Liquid Crystal Display)、有机发光二极管(OLED,Organic Light-Emitting Diode)等形式来配置显示面板。
具体在本实施例中,终端中的处理器2601会按照如下的指令,将一个或一个以上的应用程序的进程对应的可执行文件加载到存储器2602中,并由处理器2601来运行存储在存储器2602中的应用程序,从而实现各种功能。
在一个实施例中,提供了一种电子设备,包括存储器和处理器,存储器中存储有计算机可读指令,计算机可读指令被处理器执行时,使得处理器执行以下步骤:对视频交互界面上各视频窗口内的对象进行动作检测;当检测到目标视频窗口内的对象的动作信息时,根据动作信息确定动画信息以及动画运动方向;根据动画运动方向从视频交互界面上的各视频窗口中确定终点视频窗口;及按照动画信息针对终点视频窗口进行动画展示。
在一个实施例中,计算机可读指令被处理器执行时,使得处理器在执行按照动画信息针对终点视频窗口进行动画展示的步骤时,执行以下步骤:根据动画信息对终点视频窗口进行变形。
在一个实施例中,计算机可读指令被处理器执行时,使得处理器在执行按照动画信息针对终点视频窗口进行动画展示的步骤时,执行以下步骤:在终点视频窗口内显示与动画信息对应的图像,并控制图像在终点视频窗口内运动。
在一个实施例中,计算机可读指令被处理器执行时,使得处理器在执行按照动画信息针对终点视频窗口进行动画展示的步骤时,执行以下步骤:根 据动画信息在目标视频窗口内显示相应的图像;控制图像按照动画运动方向向终点视频窗口的目标终点位置运动;及在终点视频窗口内确定运动终点位置,并将目标终点位置更新为运动终点位置。
在一个实施例中,动作信息包括动作类型和动作方向;计算机可读指令被处理器执行时,使得处理器在执行根据动作信息确定动画信息以及动画运动方向的步骤时,执行以下步骤:根据动作类型确定相应的动画信息,根据动作方向确定相应的动画运动方向。
在一个实施例中,计算机可读指令被处理器执行时,使得处理器还执行以下步骤:当检测到目标视频窗口内的对象的动作信息时,对目标视频窗口内对象进行实时运动追踪;及根据追踪到的运动信息更新动画运动方向。
在一个实施例中,动作信息包括动作类型和动作方向;计算机可读指令被处理器执行时,使得处理器在执行控制图像按照动画运动方向向终点视频窗口的目标终点位置运动的步骤时,执行以下步骤:根据动画运动方向对图像进行旋转;及在目标视频窗口内对象的当前动作信息满足预设图像运动触发条件时,控制旋转后的图像按照动画运动方向向目标终点位置进行运动。
在一个实施例中,计算机可读指令被处理器执行时,使得处理器在执行根据动画运动方向从视频交互界面上的各视频窗口中确定终点视频窗口的步骤时,执行以下步骤:确定候选视频窗口的窗口判断区域,候选视频窗口为视频交互界面中除目标视频窗口以外的视频窗口;按照动画运动方向在视频交互界面上绘制相应的直线;确定直线优先接触的目标窗口判断区域;及将目标窗口判断区域对应的候选视频窗口作为终点视频窗口。
在一个实施例中,计算机可读指令被处理器执行时,使得处理器在执行在终点视频窗口内确定运动终点位置的步骤时,执行以下步骤:对终点视频窗口内的对象进行动作检测;及当检测到终点视频窗口内对象的动作信息满足预设图像接收条件时,根据终点视频窗口内对象的动作信息,在目标视频窗口内确定图像运动的运动终点位置。
在一个实施例中,计算机可读指令被处理器执行时,使得处理器在执行在终点视频窗口内确定运动终点位置的步骤时,执行以下步骤:当按照动画运动方向在视频交互界面上绘制的直线不与任何窗口判断区域接触的情况 下,确定图像运动的运动终点位置为目标视频窗口的边界位置。
在一个实施例中,计算机可读指令被处理器执行时,使得处理器还执行以下步骤:当图像运动到目标终点位置时,控制图像在目标终点位置停留预设时长;及在图像停留期间,根据终点视频窗口内对象的当前动作信息控制图像进行相应的移动。
本申请实施例终端可以基于视频窗口内对象的动作信息在视频交互界面上实现跨窗口动画;对于多人视频交互的每一交互方来说,各视频交互方终端只需进行同样的操作,也即视频窗口内对象的动作信息检测以及基于检测结果实现相应的跨窗口动画,便可以在多人视频聊天或多人视频直播中实现动画效果同步,无需将具有动画效果的视频画面传输给其他视频聊天或者视频直播用户终端,因此,可以节省网络资源。
一种非易失性的计算机可读存储介质,存储有计算机可读指令,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行以下步骤:对视频交互界面上各视频窗口内的对象进行动作检测;当检测到目标视频窗口内的对象的动作信息时,根据动作信息确定动画信息以及动画运动方向;根据动画运动方向从视频交互界面上的各视频窗口中确定终点视频窗口;及按照动画信息针对终点视频窗口进行动画展示。
在一个实施例中,计算机可读指令被处理器执行时,使得处理器在执行按照动画信息针对终点视频窗口进行动画展示的步骤时,执行以下步骤:根据动画信息对终点视频窗口进行变形。
在一个实施例中,计算机可读指令被处理器执行时,使得处理器在执行按照动画信息针对终点视频窗口进行动画展示的步骤时,执行以下步骤:在终点视频窗口内显示与动画信息对应的图像,并控制图像在终点视频窗口内运动。
在一个实施例中,计算机可读指令被处理器执行时,使得处理器在执行按照动画信息针对终点视频窗口进行动画展示的步骤时,执行以下步骤:根据动画信息在目标视频窗口内显示相应的图像;控制图像按照动画运动方向向终点视频窗口的目标终点位置运动;及在终点视频窗口内确定运动终点位 置,并将目标终点位置更新为运动终点位置。
在一个实施例中,动作信息包括动作类型和动作方向;计算机可读指令被处理器执行时,使得处理器在执行根据动作信息确定动画信息以及动画运动方向的步骤时,执行以下步骤:根据动作类型确定相应的动画信息,根据动作方向确定相应的动画运动方向。
在一个实施例中,计算机可读指令被处理器执行时,使得处理器还执行以下步骤:当检测到目标视频窗口内的对象的动作信息时,对目标视频窗口内对象进行实时运动追踪;及根据追踪到的运动信息更新动画运动方向。
在一个实施例中,动作信息包括动作类型和动作方向;计算机可读指令被处理器执行时,使得处理器在执行控制图像按照动画运动方向向终点视频窗口的目标终点位置运动的步骤时,执行以下步骤:根据动画运动方向对图像进行旋转;及在目标视频窗口内对象的当前动作信息满足预设图像运动触发条件时,控制旋转后的图像按照动画运动方向向目标终点位置进行运动。
在一个实施例中,计算机可读指令被处理器执行时,使得处理器在执行根据动画运动方向从视频交互界面上的各视频窗口中确定终点视频窗口的步骤时,执行以下步骤:确定候选视频窗口的窗口判断区域,候选视频窗口为视频交互界面中除目标视频窗口以外的视频窗口;按照动画运动方向在视频交互界面上绘制相应的直线;确定直线优先接触的目标窗口判断区域;及将目标窗口判断区域对应的候选视频窗口作为终点视频窗口。
在一个实施例中,计算机可读指令被处理器执行时,使得处理器在执行在终点视频窗口内确定运动终点位置的步骤时,执行以下步骤:对终点视频窗口内的对象进行动作检测;及当检测到终点视频窗口内对象的动作信息满足预设图像接收条件时,根据终点视频窗口内对象的动作信息,在目标视频窗口内确定图像运动的运动终点位置。
在一个实施例中,计算机可读指令被处理器执行时,使得处理器在执行在终点视频窗口内确定运动终点位置的步骤时,执行以下步骤:当按照动画运动方向在视频交互界面上绘制的直线不与任何窗口判断区域接触的情况下,确定图像运动的运动终点位置为目标视频窗口的边界位置。
在一个实施例中,计算机可读指令被处理器执行时,使得处理器还执行 以下步骤:当图像运动到目标终点位置时,控制图像在目标终点位置停留预设时长;及在图像停留期间,根据终点视频窗口内对象的当前动作信息控制图像进行相应的移动。
上述计算机可读存储介质,可以基于视频窗口内对象的动作信息在视频交互界面上实现跨窗口动画;对于多人视频交互的每一交互方来说,各视频交互方终端只需进行同样的操作,也即视频窗口内对象的动作信息检测以及基于检测结果实现相应的跨窗口动画,便可以在多人视频聊天或多人视频直播中实现动画效果同步,无需将具有动画效果的视频画面传输给其他视频聊天或者视频直播用户终端,因此,可以节省网络资源。
本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令相关的硬件来完成,该程序可以存储于一计算机可读存储介质中,存储介质可以包括:只读存储器(ROM,Read Only Memory)、随机存取记忆体(RAM,Random Access Memory)、磁盘或光盘等。
以上对本发明实施例所提供的一种动画实现方法、装置和存储介质进行了详细介绍,本文中应用了具体个例对本发明的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本发明的方法及其核心思想;同时,对于本领域的技术人员,依据本发明的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本发明的限制。

Claims (20)

  1. 一种动画实现方法,由终端执行,所述终端包括存储器和处理器,所述方法包括:
    对视频交互界面上各视频窗口内的对象进行动作检测;
    当检测到目标视频窗口内的对象的动作信息时,根据所述动作信息确定动画信息以及动画运动方向;
    根据所述动画运动方向从所述视频交互界面上的各视频窗口中确定终点视频窗口;及
    按照所述动画信息针对所述终点视频窗口进行动画展示。
  2. 如权利要求1所述的动画实现方法,其特征在于,所述按照所述动画信息针对所述终点视频窗口进行动画展示,包括:
    根据所述动画信息对所述终点视频窗口进行变形。
  3. 如权利要求1所述的动画实现方法,其特征在于,所述按照所述动画信息针对所述终点视频窗口进行动画展示,包括:
    在所述终点视频窗口内显示与所述动画信息对应的图像,并控制所述图像在所述终点视频窗口内运动。
  4. 如权利要求1所述的动画实现方法,其特征在于,所述按照所述动画信息针对所述终点视频窗口进行动画展示,包括:
    根据所述动画信息在所述目标视频窗口内显示相应的图像;
    控制所述图像按照所述动画运动方向向所述终点视频窗口的目标终点位置运动;及
    在所述终点视频窗口内确定运动终点位置,并将所述目标终点位置更新为运动终点位置。
  5. 如权利要求1所述的动画实现方法,其特征在于,所述动作信息包括动作类型和动作方向;
    所述根据所述动作信息确定动画信息以及动画运动方向,包括:
    根据所述动作类型确定相应的动画信息,根据所述动作方向确定相应的动画运动方向。
  6. 如权利要求4所述的动画实现方法,其特征在于,所述动画实现方法 还包括:
    当检测到目标视频窗口内的对象的动作信息时,对所述目标视频窗口内对象进行实时运动追踪;及
    根据追踪到的运动信息更新所述动画运动方向。
  7. 如权利要求4所述的动画实现方法,其特征在于,所述控制所述图像按照所述动画运动方向向所述终点视频窗口的目标终点位置运动,包括:
    根据所述动画运动方向对所述图像进行旋转;及
    在目标视频窗口内对象的当前动作信息满足预设图像运动触发条件时,控制旋转后的图像按照所述动画运动方向向目标终点位置进行运动。
  8. 如权利要求1-7任一项所述的动画实现方法,其特征在于,所述根据所述动画运动方向从所述视频交互界面上的各视频窗口中确定终点视频窗口,包括:
    确定候选视频窗口的窗口判断区域,所述候选视频窗口为所述视频交互界面中除所述目标视频窗口以外的视频窗口;
    按照所述动画运动方向在所述视频交互界面上绘制相应的直线;
    确定所述直线优先接触的目标窗口判断区域;及
    将所述目标窗口判断区域对应的候选视频窗口作为终点视频窗口。
  9. 如权利要求4所述的动画实现方法,其特征在于,所述在所述终点视频窗口内确定运动终点位置,包括:
    对所述终点视频窗口内的对象进行动作检测;及
    当检测到所述终点视频窗口内对象的动作信息满足预设图像接收条件时,根据所述终点视频窗口内对象的动作信息,在所述目标视频窗口内确定所述图像运动的运动终点位置。
  10. 如权利要求9所述的动画实现方法,其特征在于,所述在所述终点视频窗口内确定运动终点位置还包括:
    当按照所述动画运动方向在所述视频交互界面上绘制的直线不与任何窗口判断区域接触的情况下,确定所述图像运动的运动终点位置为所述目标视频窗口的边界位置。
  11. 如权利要求4所述的动画实现方法,其特征在于,所述方法还包括:
    当所述图像运动到目标终点位置时,控制所述图像在所述目标终点位置停留预设时长;及
    在所述图像停留期间,根据所述终点视频窗口内对象的当前动作信息控制所述图像进行相应的移动。
  12. 一种终端,包括存储器和处理器,所述存储器中存储有计算机可读指令,所述计算机可读指令被所述处理器执行时,使得所述处理器执行以下步骤:
    对视频交互界面上各视频窗口内的对象进行动作检测;
    当检测到目标视频窗口内的对象的动作信息时,根据所述动作信息确定动画信息以及动画运动方向;
    根据所述动画运动方向从所述视频交互界面上的各视频窗口中确定终点视频窗口;及
    按照所述动画信息针对所述终点视频窗口进行动画展示。
  13. 根据权利要求12所述的终端,其特征在于,所述计算机可读指令被所述处理器执行时,使得所述处理器在执行按照所述动画信息针对所述终点视频窗口进行动画展示的步骤时,执行以下步骤:
    根据所述动画信息对所述终点视频窗口进行变形。
  14. 根据权利要求12所述的终端,其特征在于,所述计算机可读指令被所述处理器执行时,使得所述处理器在执行按照所述动画信息针对所述终点视频窗口进行动画展示的步骤时,执行以下步骤:
    在所述终点视频窗口内显示与所述动画信息对应的图像,并控制所述图像在所述终点视频窗口内运动。
  15. 根据权利要求12所述的终端,其特征在于,所述计算机可读指令被所述处理器执行时,使得所述处理器在执行按照所述动画信息针对所述终点视频窗口进行动画展示的步骤时,执行以下步骤:
    根据所述动画信息在所述目标视频窗口内显示相应的图像;
    控制所述图像按照所述动画运动方向向所述终点视频窗口的目标终点位置运动;及
    在所述终点视频窗口内确定运动终点位置,并将所述目标终点位置更新为运动终点位置。
  16. 根据权利要求12所述的终端,其特征在于,所述动作信息包括动作类型和动作方向;所述计算机可读指令被所述处理器执行时,使得所述处理器在执行根据所述动作信息确定动画信息以及动画运动方向的步骤时,执行以下步骤:
    根据所述动作类型确定相应的动画信息,根据所述动作方向确定相应的动画运动方向。
  17. 一种非易失性的计算机可读存储介质,存储有计算机可读指令,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行以下步骤:
    对视频交互界面上各视频窗口内的对象进行动作检测;
    当检测到目标视频窗口内的对象的动作信息时,根据所述动作信息确定动画信息以及动画运动方向;
    根据所述动画运动方向从所述视频交互界面上的各视频窗口中确定终点视频窗口;及
    按照所述动画信息针对所述终点视频窗口进行动画展示。
  18. 根据权利要求17所述的计算机可读存储介质,其特征在于,所述计算机可读指令被所述处理器执行时,使得所述处理器在执行按照所述动画信息针对所述终点视频窗口进行动画展示的步骤时,执行以下步骤:
    根据所述动画信息对所述终点视频窗口进行变形。
  19. 根据权利要求17所述的计算机可读存储介质,其特征在于,所述计算机可读指令被所述处理器执行时,使得所述处理器在执行按照所述动画信息针对所述终点视频窗口进行动画展示的步骤时,执行以下步骤:
    在所述终点视频窗口内显示与所述动画信息对应的图像,并控制所述图像在所述终点视频窗口内运动。
  20. 根据权利要求17所述的计算机可读存储介质,其特征在于,所述计算机可读指令被所述处理器执行时,使得所述处理器在执行按照所述动画信 息针对所述终点视频窗口进行动画展示的步骤时,执行以下步骤:
    根据所述动画信息在所述目标视频窗口内显示相应的图像;
    控制所述图像按照所述动画运动方向向所述终点视频窗口的目标终点位置运动;及
    在所述终点视频窗口内确定运动终点位置,并将所述目标终点位置更新为运动终点位置。
PCT/CN2018/117278 2017-12-14 2018-11-23 动画实现方法、终端及存储介质 WO2019114528A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP18888608.9A EP3726843B1 (en) 2017-12-14 2018-11-23 Animation implementation method, terminal and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711339423.6 2017-12-14
CN201711339423.6A CN109963187B (zh) 2017-12-14 2017-12-14 一种动画实现方法和装置

Publications (1)

Publication Number Publication Date
WO2019114528A1 true WO2019114528A1 (zh) 2019-06-20

Family

ID=66818938

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/117278 WO2019114528A1 (zh) 2017-12-14 2018-11-23 动画实现方法、终端及存储介质

Country Status (3)

Country Link
EP (1) EP3726843B1 (zh)
CN (1) CN109963187B (zh)
WO (1) WO2019114528A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112887631B (zh) * 2019-11-29 2022-08-12 北京字节跳动网络技术有限公司 在视频中显示对象的方法、装置、电子设备及计算机可读存储介质
CN113038149A (zh) 2019-12-09 2021-06-25 上海幻电信息科技有限公司 直播视频互动方法、装置以及计算机设备
CN111107280B (zh) * 2019-12-12 2022-09-06 北京字节跳动网络技术有限公司 特效的处理方法、装置、电子设备及存储介质
CN111918090B (zh) * 2020-08-10 2023-03-28 广州繁星互娱信息科技有限公司 直播画面显示方法、装置、终端及存储介质
CN113011259A (zh) * 2021-02-09 2021-06-22 苏州臻迪智能科技有限公司 电子设备的操作方法
CN115454313A (zh) * 2021-06-09 2022-12-09 脸萌有限公司 触碰动画显示方法、装置、设备及介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103473027A (zh) * 2013-09-16 2013-12-25 张智锋 一种通讯终端分屏多任务交互方法及通讯终端
US20140184912A1 (en) * 2011-11-16 2014-07-03 Stmicroelectronics Pvt Ltd. Video window detection
CN106165320A (zh) * 2014-03-13 2016-11-23 谷歌公司 画中画视频聊天
CN106415667A (zh) * 2014-04-25 2017-02-15 索尼互动娱乐美国有限责任公司 具有增强的深度效果的计算机图形

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7039676B1 (en) * 2000-10-31 2006-05-02 International Business Machines Corporation Using video image analysis to automatically transmit gestures over a network in a chat or instant messaging session
WO2007134115A2 (en) * 2006-05-09 2007-11-22 Disney Enterprises, Inc. Interactive animation
US10642364B2 (en) * 2009-04-02 2020-05-05 Oblong Industries, Inc. Processing tracking and recognition data in gestural recognition systems
CN102447873A (zh) * 2010-10-13 2012-05-09 张明 哈哈视频网络视频聊天娱乐辅助系统
CN103020648B (zh) * 2013-01-09 2016-04-13 艾迪普(北京)文化科技股份有限公司 一种动作类型识别方法、节目播出方法及装置
US9454840B2 (en) * 2013-12-13 2016-09-27 Blake Caldwell System and method for interactive animations for enhanced and personalized video communications
CN107124664A (zh) * 2017-05-25 2017-09-01 百度在线网络技术(北京)有限公司 应用于视频直播的交互方法和装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140184912A1 (en) * 2011-11-16 2014-07-03 Stmicroelectronics Pvt Ltd. Video window detection
CN103473027A (zh) * 2013-09-16 2013-12-25 张智锋 一种通讯终端分屏多任务交互方法及通讯终端
CN106165320A (zh) * 2014-03-13 2016-11-23 谷歌公司 画中画视频聊天
CN106415667A (zh) * 2014-04-25 2017-02-15 索尼互动娱乐美国有限责任公司 具有增强的深度效果的计算机图形

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3726843A4

Also Published As

Publication number Publication date
EP3726843B1 (en) 2022-12-28
CN109963187A (zh) 2019-07-02
CN109963187B (zh) 2021-08-31
EP3726843A4 (en) 2020-10-21
EP3726843A1 (en) 2020-10-21

Similar Documents

Publication Publication Date Title
WO2019114528A1 (zh) 动画实现方法、终端及存储介质
KR102319206B1 (ko) 정보 처리 방법 및 장치 그리고 서버
TWI683578B (zh) 視頻通信的方法、裝置、終端及電腦可讀儲存介質
US9952820B2 (en) Augmented reality representations across multiple devices
WO2019029406A1 (zh) 表情展示方法、装置、计算机可读存储介质及终端
CN111078168A (zh) 一种信息处理方法、第一电子设备和存储介质
US20230017694A1 (en) Method and apparatus for controlling interface display, device, and storage medium
WO2018103633A1 (zh) 一种图像处理的方法及装置
WO2022183707A1 (zh) 互动方法及其装置
WO2023138192A1 (zh) 控制虚拟对象拾取虚拟道具的方法、终端及存储介质
CN113168281A (zh) 程序、电子装置和方法
CN114419230A (zh) 一种图像渲染方法、装置、电子设备及存储介质
CN113426124A (zh) 游戏中的显示控制方法、装置、存储介质及计算机设备
CN109656463B (zh) 个性表情的生成方法、装置及系统
CN117085314A (zh) 云游戏的辅助操控方法、装置和存储介质及电子设备
CN112035083A (zh) 车窗显示的方法及装置
CN115999153A (zh) 虚拟角色的控制方法、装置、存储介质及终端设备
CN115193064A (zh) 虚拟对象的控制方法、装置、存储介质及计算机设备
CN113975802A (zh) 游戏控制方法、装置、存储介质与电子设备
CN113426115A (zh) 游戏角色的展示方法、装置和终端
KR20210081935A (ko) 제스처를 이용하여 대화 메시지에 감정을 표현하는 방법, 시스템, 및 컴퓨터 프로그램
WO2023226569A1 (zh) 虚拟场景中的消息处理方法、装置、电子设备及计算机可读存储介质及计算机程序产品
JP7162737B2 (ja) コンピュータプログラム、サーバ装置、端末装置、システム及び方法
CN117915158A (zh) 直播间互动控制方法及装置、电子设备和存储介质
CN116059639A (zh) 虚拟对象控制方法、装置、电子设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18888608

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018888608

Country of ref document: EP

Effective date: 20200714