WO2021160108A1 - Animation video processing method and apparatus, electronic device, and storage medium - Google Patents

Animation video processing method and apparatus, electronic device, and storage medium

Info

Publication number
WO2021160108A1
Authority
WO
WIPO (PCT)
Prior art keywords
target object
animation
animation video
video
motion
Prior art date
Application number
PCT/CN2021/076159
Other languages
English (en)
French (fr)
Inventor
张天翔 (ZHANG, Tianxiang)
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited (腾讯科技(深圳)有限公司)
Publication of WO2021160108A1
Priority to US17/687,008 (US11836841B2)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/65 Methods for processing data by generating or executing the game program for computing the condition of a game character
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8005 Athletics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2213/00 Indexing scheme for animation
    • G06T2213/08 Animation software package

Definitions

  • The present disclosure relates to information processing technology, and in particular to an animation video processing method and apparatus, an electronic device, and a storage medium.
  • An AI (Artificial Intelligence) character refers to a game character that is controlled by a computer (rather than a user) in a game scene.
  • In massively multiplayer online role-playing games (MMORPGs), AI characters are common.
  • The AI character can move freely in the game scene like a user-controlled game character, and supports functions such as animation and sound effects; its position can also be accurately synchronized by the server to each client.
  • The behavior logic of the AI character is controlled by a behavior tree, and AI planners can configure its behavior.
  • However, the animation state machine has poor scalability when facing complex motion behaviors.
  • The Motion Matching algorithm requires recording a large amount of motion capture data to cover the various motion behaviors of a character; the captured data serves as the data basis to ensure that a relatively close animation clip can be found regardless of the motion state. This process takes up a lot of system overhead and affects the user experience.
  • In view of this, the embodiments of the present disclosure provide an animation video processing method and apparatus, an electronic device, and a storage medium.
  • The technical solutions of the embodiments of the present disclosure are implemented as follows:
  • The present disclosure provides an animation video processing method, the method including:
  • determining an original animation video matching a target object, where the original animation video is used to characterize the motion state of the target object in different usage scenarios; preprocessing the original animation video to obtain key video frames in the original animation video and motion data corresponding to the key video frames; determining a motion data set matching the target object according to the motion data corresponding to the key video frames; determining a displacement parameter of the target object based on the real-time motion state of the target object; and obtaining, based on the motion data set matching the target object and through the displacement parameter of the target object, an animation video matching the real-time motion state of the target object.
  • the embodiment of the present disclosure also provides an animation video processing device, wherein the device includes:
  • An information transmission module configured to determine an original animation video matching the target object, wherein the original animation video is used to characterize the motion state of the target object in different usage scenarios;
  • An information processing module configured to preprocess the original animation video to obtain key video frames in the original animation video and motion data corresponding to the key video frames;
  • the information processing module is configured to determine a set of motion data matching the target object according to the motion data corresponding to the key video frame;
  • the information processing module is configured to determine the displacement parameter of the target object based on the real-time motion state of the target object;
  • the information processing module is configured to obtain an animation video matching the real-time motion state of the target object through the displacement parameter of the target object based on the motion data set matching the target object.
  • the information processing module is configured to determine the animation video output environment corresponding to the target object
  • the information processing module is configured to determine the motion state of the target object in different usage scenarios according to the animation video output environment
  • the information processing module is configured to dynamically capture the motion of the captured object according to the motion state of the target object in different usage scenarios to form an original animation video that matches the target object.
  • the information processing module is configured to detect the positions of the limbs of the target object in all video frames of the original animation video;
  • the information processing module is configured to determine, when the position of a limb of the target object is located in a corresponding horizontal plane, or when the position of a limb of the target object is in contact with a corresponding reference object, the video frame including that limb position as a key video frame;
  • the information processing module is configured to determine, based on the key video frames, the displacement parameters of the target object in different usage scenarios as the motion data corresponding to the key video frames.
  • the information processing module is configured to determine the speed of the left lower limb of the target object and the speed of the right lower limb of the target object when the limbs of the target object are the left lower limb and the right lower limb of the target object;
  • the information processing module is configured to determine that the position of the left lower limb of the target object is located in a corresponding horizontal plane when the difference between the speed of the left lower limb of the target object and the speed of the right lower limb of the target object reaches a negative extreme value;
  • the information processing module is configured to determine that the position of the right lower limb of the target object is located in a corresponding horizontal plane when the difference between the speed of the left lower limb of the target object and the speed of the right lower limb of the target object reaches a positive extreme value.
  • the information processing module is configured to determine the speed of the left upper limb of the target object and the speed of the right upper limb of the target object when the limbs of the target object are the left upper limb and the right upper limb of the target object;
  • the information processing module is configured to determine that the position of the left upper limb of the target object is in contact with a corresponding reference object when the difference between the speed of the left upper limb of the target object and the speed of the right upper limb of the target object reaches a negative extreme value;
  • the information processing module is configured to determine that the position of the right upper limb of the target object is in contact with a corresponding reference object when the difference between the speed of the left upper limb of the target object and the speed of the right upper limb of the target object reaches a positive extreme value.
  • the information processing module is configured to determine the movement path of the target object based on a pathfinding algorithm;
  • the information processing module is configured to determine, according to the motion data set matching the target object, the maximum displacement parameter matching the target object and the corresponding maximum acceleration parameter;
  • the information processing module is configured to determine the displacement parameters of the target object at different moments according to the movement path of the target object, the maximum displacement parameter matching the target object, and the corresponding maximum acceleration parameter.
  • the information processing module is configured to determine a first motion vector corresponding to the current motion state of the target object based on the displacement parameter of the target object;
  • the information processing module is configured to determine a second motion vector corresponding to each key video frame based on a set of motion data matching the target object;
  • the information processing module is configured to determine a second motion vector matching the first motion vector in a search binary tree structure corresponding to the second motion vector according to the first motion vector;
  • the information processing module is configured to determine a corresponding key video frame according to the second motion vector matching the first motion vector, and to obtain, through the determined key video frame, the animation video matching the real-time motion state of the target object.
  • the information processing module is configured to, when the first motion vector characterizes that the position of the left lower limb of the target object is located in a corresponding horizontal plane, search the search binary tree structure of the right lower limb corresponding to the second motion vectors to determine the second motion vector matching the first motion vector; or,
  • the information processing module is configured to, when the first motion vector characterizes that the position of the right lower limb of the target object is located in a corresponding horizontal plane, search the search binary tree structure of the left lower limb corresponding to the second motion vectors to determine the second motion vector matching the first motion vector.
  • the information processing module is configured to determine different animation videos to be output according to the key video frames;
  • the information processing module is configured to determine, among the different animation videos to be output, the animation video to be output in which the distance between the limb position of the target object and the current limb position of the target object is smallest, as the animation video matching the real-time motion state of the target object.
  • the information processing module is configured to obtain a target resolution corresponding to the animation video output environment
  • the information processing module is configured to perform, based on the target resolution, resolution enhancement processing on the animation video matching the real-time motion state of the target object, so that the animation video matching the real-time motion state of the target object matches the animation video output environment.
  • the embodiments of the present disclosure also provide an electronic device, which includes:
  • a memory configured to store executable instructions;
  • a processor configured to implement the preceding animation video processing method when executing the executable instructions stored in the memory.
  • The embodiments of the present disclosure also provide a computer-readable storage medium storing executable instructions, where the executable instructions are executed by a processor to implement the preceding animation video processing method.
  • The technical solution shown in the embodiments of the present disclosure determines the original animation video matching the target object, where the original animation video is used to characterize the motion state of the target object in different usage scenarios; preprocesses the original animation video to obtain the key video frames in the original animation video and the motion data corresponding to the key video frames; determines the motion data set matching the target object according to the motion data corresponding to the key video frames; determines the displacement parameter of the target object based on the real-time motion state of the target object; and, based on the motion data set matching the target object, obtains through the displacement parameter of the target object the animation video matching the real-time motion state of the target object.
  • In this way, the animation video matching the real-time motion state of the target object can be obtained accurately and efficiently from the original animation video. Compared with the traditional technology, under the same information processing capability of the user's electronic device, the number of supported AI characters and the corresponding animation quality are greatly improved, effectively improving the user experience.
  • FIG. 1 is a schematic diagram of a usage scenario of an animation video processing method provided by an embodiment of the disclosure
  • FIG. 2 is a schematic diagram of the structure of an electronic device provided by an embodiment of the disclosure.
  • FIG. 3 is a schematic diagram of an optional flow of an animation video processing method provided by an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram of an optional flow of an animation video processing method provided by an embodiment of the present disclosure
  • FIG. 5 is a schematic diagram of an optional flow of an animation video processing method provided by an embodiment of the present disclosure
  • FIG. 6 is a schematic diagram of a front-end display of an animation video processing method provided by an embodiment of the present disclosure
  • FIG. 7 is a schematic diagram of a front-end display of an animation video processing method provided by an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram of an optional flow of an animation video processing method provided by an embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram of a display effect of an animation video processing method provided by an embodiment of the disclosure.
  • FIG. 10A is a schematic diagram of an optional flow of an animation video processing method provided by an embodiment of the present disclosure.
  • FIG. 10B is a schematic diagram of an optional flow chart of the animation video processing method provided by an embodiment of the present disclosure.
  • FIG. 11 is a schematic diagram of a display effect of an animation video processing method provided by an embodiment of the disclosure.
  • FIG. 12 is a schematic diagram of an optional processing process of the animation video processing method provided by an embodiment of the disclosure.
  • Terminals including but not limited to: ordinary terminals and dedicated terminals, wherein the ordinary terminals maintain a long connection and/or a short connection with the transmission channel, and the dedicated terminal maintains a long connection with the transmission channel.
  • Client: a carrier that implements a specific function in a terminal; for example, a mobile client (APP) is the carrier of a specific function in a mobile terminal, such as the function of performing payment and consumption or purchasing a wealth management product.
  • Virtual environment: the virtual environment displayed (or provided) when the application program runs on the terminal.
  • the virtual environment may be a simulation environment of the real world, a semi-simulated and semi-fictional three-dimensional environment, or a purely fictitious three-dimensional environment.
  • the virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment.
  • the following embodiments take the virtual environment as a three-dimensional virtual environment as an example, but are not limited thereto.
  • In some embodiments, the virtual environment is also used for a virtual environment battle between at least two virtual objects.
  • In some embodiments, the virtual environment is also used for battles between at least two virtual objects using virtual firearms.
  • the virtual environment is also used to use virtual firearms for battle between at least two virtual objects within the range of the target area, and the range of the target area will continue to decrease with the passage of time in the virtual environment.
  • Virtual props refer to virtual weapons that launch bullets in a virtual environment, or virtual bows and arrows or virtual slingshots that launch arrow clusters.
  • Virtual objects can pick up virtual firearms in the virtual environment and attack with the virtual firearms obtained by picking them up.
  • The virtual object may be a user virtual object controlled by an operation on the client, an artificial intelligence (AI) character set in the virtual scene battle through training, or a non-player character (Non-Player Character, NPC) set in the virtual scene interaction.
  • the virtual object may be a virtual character competing in a virtual scene.
  • the number of virtual objects participating in the interaction in the virtual scene may be preset, or may be dynamically determined according to the number of clients participating in the interaction.
  • The user can control the virtual object to free fall, glide, or open a parachute to fall in the sky of the virtual scene, and to run, jump, crawl, bend forward, and so on, on the land.
  • the virtual object can be controlled to swim, float or dive in the ocean.
  • the user can also control the virtual object to take a virtual vehicle to move in the virtual scene.
  • the virtual vehicle can be a virtual car, a virtual aircraft, or another type of virtual vehicle.
  • the above scenarios are only used for illustration here, and the embodiments of the present disclosure do not specifically limit this.
  • the user can also control the virtual object to interact with other virtual objects, such as in battles, through virtual weapons.
  • the virtual weapons can be cold weapons or hot weapons.
  • the present disclosure does not specifically limit the types of virtual weapons.
  • AI characters: non-player characters in the game, such as enemy characters (which may be machine-controlled) or teammate characters.
  • Animation state machine: a technical means to drive character animation performance with different states and the transitions between them.
  • Locomotion: includes basic motion behaviors such as walking, running, and turning of the game target object (game character).
  • Motion capture technology: the motion state of real actors is recorded by sensors and converted into animation data.
  • Motion Matching: a technology that uses a large amount of motion capture data to drive character animation.
  • Sync Point: a synchronization point marked on the animation (usually the moment when the left/right foot falls to the ground) to ensure that the positions of the feet are roughly the same when two animations are switched.
  • KD Tree: a binary-tree-like data structure that can be used to quickly find the nearest neighbor of a specified coordinate in a large amount of data.
  • Euclidean distance: a measure of the distance between two coordinates, calculated by squaring the coordinate difference in each spatial dimension, summing the squares, and taking the square root; it corresponds to the physical distance between two points in three-dimensional space.
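  • In formula form (the standard definition, stated here for reference; for the six-dimensional matching vectors described later in this disclosure, n = 6):

    d(p, q) = \sqrt{\sum_{i=1}^{n} (p_i - q_i)^2}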
  • The methods provided in this disclosure can be applied to virtual reality applications, three-dimensional map programs, military simulation programs, first-person shooting games (FPS), multiplayer online battle arena games (MOBA), and the like.
  • the following embodiments are examples of applications in games.
  • Games based on virtual environments are often composed of one or more maps of the game world.
  • The virtual environment in the game simulates scenes of the real world. A user can manipulate a virtual object in the game to walk, run, jump, shoot, fight, drive, switch to and use virtual weapons, attack other virtual objects with virtual weapons, and perform other actions in the virtual environment, with strong interactivity, and multiple users can team up online for competitive games.
  • When the user controls the virtual object to use a virtual weapon to attack a target virtual object, the user needs to move (such as running or climbing) according to the location of the target virtual object.
  • the AI character in the same game also needs to move in the game interface.
  • a first-person shooting game refers to a shooting game that a user can play from a first-person perspective
  • the screen of the virtual environment in the game is a screen that observes the virtual environment from the perspective of the first virtual object.
  • at least two virtual objects play a single-game battle mode in the virtual environment.
  • The virtual objects avoid attacks initiated by other virtual objects and dangers in the virtual environment (such as gas circles, swamps, etc.) in order to survive in the virtual environment; when the life value of a virtual object in the virtual environment reaches zero, the life of the virtual object in the virtual environment ends, and the virtual object that survives last in the virtual environment is the winner.
  • The battle starts at the moment when the first client joins the battle, and ends at the moment when the last client exits the battle.
  • Each client can control one or more virtual objects in the virtual environment.
  • the competition mode of the battle may include a single-player battle mode, a two-player team battle mode, or a multi-player team battle mode, and the embodiment of the present disclosure does not limit the battle mode.
  • FIG. 1 is a schematic diagram of a usage scenario of the animation video processing method provided by an embodiment of the disclosure.
  • A terminal (including the terminal 10-1 and the terminal 10-2) is provided with a client capable of displaying corresponding animation video processing software.
  • Users can obtain animation video processing and display through the corresponding clients, and trigger the corresponding animation video processing process while the game process is running (for example, running along different motion routes or climbing); the terminal is connected to the server 200 through the network 300.
  • the network 300 may be a wide area network or a local area network, or a combination of the two, and uses a wireless link to realize data transmission.
  • The server 200 is configured to deploy the animation video processing device to implement the animation video processing method provided in the present disclosure, so as to: determine an original animation video matching the target object, where the original animation video is used to characterize the motion state of the target object in different usage scenarios; preprocess the original animation video to obtain key video frames in the original animation video and motion data corresponding to the key video frames; determine a motion data set matching the target object according to the motion data corresponding to the key video frames; determine a displacement parameter of the target object based on the real-time motion state of the target object; and, based on the motion data set matching the target object, obtain through the displacement parameter of the target object an animation video matching the real-time motion state of the target object.
  • The animation video processing device can be applied to different game environments, including but not limited to virtual reality applications, three-dimensional map programs, military simulation programs, first-person shooting games (FPS), multiplayer online battle arena games (MOBA), and the like.
  • the user's motion data in the current display interface can also be called by other applications.
  • When the animation video processing device processes different animation videos to obtain an animation video matching the real-time motion state of the target object, the process specifically includes: determining an original animation video matching the target object, where the original animation video is used to characterize the motion state of the target object in different usage scenarios; preprocessing the original animation video to obtain key video frames in the original animation video and motion data corresponding to the key video frames; determining a motion data set matching the target object according to the motion data corresponding to the key video frames; determining a displacement parameter of the target object based on the real-time motion state of the target object; and, based on the motion data set matching the target object, obtaining through the displacement parameter of the target object an animation video matching the real-time motion state of the target object.
  • The animation video processing device can be implemented in various forms, such as a dedicated terminal with the processing function of the animation video processing device, or an electronic device with the animation video processing function, such as a mobile phone or a tablet computer.
  • FIG. 2 is a schematic diagram of the structure of the electronic device provided by the embodiments of the present disclosure. It can be understood that FIG. 2 only shows an exemplary structure of the animation video processing device, not the entire structure; part or all of the structure shown in FIG. 2 can be implemented as required.
  • the animation video processing apparatus includes: at least one processor 201, a memory 202, a user interface 203, and at least one network interface 204.
  • the various components in the animation video processing device are coupled together through the bus system 205.
  • the bus system 205 is used to implement connection and communication between these components.
  • the bus system 205 also includes a power bus, a control bus, and a status signal bus.
  • various buses are marked as the bus system 205 in FIG. 2.
  • the user interface 203 may include a display, a keyboard, a mouse, a trackball, a click wheel, keys, buttons, a touch panel, or a touch screen.
  • the memory 202 may be a volatile memory or a non-volatile memory, and may also include both volatile and non-volatile memory.
  • the memory 202 in the embodiment of the present disclosure can store data to support the operation of the terminal (such as 10-1). Examples of these data include: any computer program used to operate on the terminal (such as 10-1), such as an operating system and application programs.
  • the operating system contains various system programs, such as a framework layer, a core library layer, and a driver layer, which are used to implement various basic services and process hardware-based tasks.
  • Applications can include various applications.
  • the animation video processing device provided in the embodiments of the present disclosure may be implemented in a combination of software and hardware.
  • For example, the animation video processing device provided in the embodiments of the present disclosure may be a processor in the form of a hardware decoding processor, which is programmed to execute the animation video processing method provided by the embodiments of the present disclosure.
  • For example, a processor in the form of a hardware decoding processor may adopt one or more application-specific integrated circuits (ASIC), DSPs, programmable logic devices (PLD), complex programmable logic devices (CPLD), field-programmable gate arrays (FPGA), or other electronic components.
  • the animation video processing device provided by the embodiments of the present disclosure may be directly embodied as a combination of software modules executed by the processor 201, and the software modules may be located in a storage medium.
  • The storage medium is located in the memory 202; the processor 201 reads the executable instructions included in the software module in the memory 202, and, in combination with necessary hardware (for example, including the processor 201 and other components connected to the bus 205), completes the animation video processing method provided by the embodiments of the present disclosure.
  • As an example, the processor 201 may be an integrated circuit chip with signal processing capabilities, such as a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, where the general-purpose processor may be a microprocessor or any conventional processor.
  • The device provided by the embodiments of the present disclosure can directly use the processor 201 in the form of a hardware decoding processor to complete the execution, for example, by one or more application-specific integrated circuits (ASIC), DSPs, PLDs, CPLDs, FPGAs, or other electronic components.
  • the memory 202 in the embodiment of the present disclosure is used to store various types of data to support the operation of the animation video processing device. Examples of these data include: any executable instructions for operating on the animation video processing device, such as executable instructions, and a program that implements the animation video processing method of the embodiments of the present disclosure may be included in the executable instructions.
  • the animation video processing device provided in the embodiments of the present disclosure can be implemented in software.
  • FIG. 2 shows the animation video processing device stored in the memory 202, which can be software in the form of programs and plug-ins and includes a series of modules.
  • As an example of the program stored in the memory 202, the memory may include the animation video processing device.
  • the animation video processing device includes the following software modules: an information transmission module 2081 and an information processing module 2082.
  • the information transmission module 2081 determines the original animation video matching the target object, where the original animation video is used to characterize the motion state of the target object in different usage scenarios;
  • the information processing module 2082 is configured to preprocess the original animation video, and obtain key video frames in the original animation video and motion data corresponding to the key video frames;
  • the information processing module 2082 is configured to determine a set of motion data matching the target object according to the motion data corresponding to the key video frame;
  • the information processing module 2082 is configured to determine the displacement parameter of the target object based on the real-time motion state of the target object;
  • the information processing module 2082 is configured to obtain an animation video matching the real-time motion state of the target object through the displacement parameter of the target object based on the motion data set matching the target object.
  • FIG. 3 is an optional flowchart of the animation video processing method provided by an embodiment of the present disclosure. Understandably, the steps shown in FIG. 3 can be executed by various electronic devices running the animation video processing device, for example, various game devices with the animation video processing device, where a dedicated terminal with the animation video processing device can be packaged in the terminal 10-1 shown in FIG. 1 to execute the corresponding software modules in the animation video processing device shown in the preceding FIG. 2. The steps shown in FIG. 3 will be described below.
  • Step 301 The animation video processing device determines the original animation video that matches the target object.
  • the original animation video is used to characterize the motion state of the target object in different usage scenarios.
  • The different usage scenarios involved in the embodiments of the present disclosure include but are not limited to: 2D video game scenes, 3D somatosensory game scenes, virtual reality interaction scenes, and the like.
  • the target object may be a movable object in different usage scenarios.
  • the scene can be a 2D usage scene or a 3D usage scene.
  • Game scenes refer to virtual scenes created during game matches for game characters to compete in, such as virtual houses, virtual islands, virtual maps, and so on.
  • the target object can be a game character in a game scene, such as a game character controlled by a player, or an AI character controlled by a computer.
  • the target object may also be a movable object other than the game character in the game scene, such as any movable object such as monsters, vehicles, ships, and flying objects.
  • Determining the original animation video matching the target object can be achieved in the following ways:
  • determining the animation video output environment corresponding to the target object; determining the motion state of the target object in different usage scenarios according to the animation video output environment; and dynamically capturing the motion action of the capture object according to the motion state of the target object in different usage scenarios, to form an original animation video matching the target object.
  • The terminal (including the terminal 10-1 and the terminal 10-2) is provided with a client capable of displaying software with the corresponding AI character, such as clients or plug-ins of different games; the user can, through the corresponding client, acquire AI characters, have them interact with user-controlled characters, and display them, and trigger the corresponding animation video processing process in the process of virtual resource changes (for example, virtual objects can run or attack in a virtual environment);
  • the motion capture data is used to cover the different motion behaviors of the character and form the original animation video matching the target object, which can ensure complete coverage of the different motion behaviors of the character.
  • Step 302 The animation video processing device preprocesses the original animation video, and obtains key video frames in the original animation video and motion data corresponding to the key video frames.
  • Preprocessing the original animation video to obtain the key video frames in the original animation video and the motion data corresponding to the key video frames can be implemented in the following ways:
  • detecting the positions of the limbs of the target object in all video frames in the original animation video; when the position of a limb of the target object is located in a corresponding horizontal plane, or when the position of a limb of the target object is in contact with a corresponding reference object, determining the video frame including that limb position as a key video frame; and, based on the key video frames, determining the displacement parameters of the target object in different usage scenarios as the motion data corresponding to the key video frames.
  • Among them, the AI character moves according to the path pre-calculated by the pathfinding algorithm, and the motion trajectory generated by the pathfinding algorithm is generally a polyline segment rather than an arc.
  • Therefore, the motion mode of the AI character is relatively simple and can be split into motion animation videos in different directions, for example, starting animation videos in eight directions (front, back, left, right, left front, right front, left back, right back) and running animation videos in eight directions;
  • together with the turning animation videos and emergency stop animation videos, these can cover all the motion states of the AI character, as enumerated in the sketch below.
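  • As a rough illustration (the names are ours, not the patent's, and this sketches the data rather than any engine API), the small clip inventory implied above can be enumerated in a handful of lines:

```python
from enum import Enum

class Direction(Enum):
    FRONT = 0
    BACK = 1
    LEFT = 2
    RIGHT = 3
    FRONT_LEFT = 4
    FRONT_RIGHT = 5
    BACK_LEFT = 6
    BACK_RIGHT = 7

# One start clip and one run clip per direction, plus turn clips and an
# emergency stop clip: a few dozen clips instead of bulk motion capture.
CLIP_SET = (
    [("start", d) for d in Direction]
    + [("run", d) for d in Direction]
    + [("turn", d) for d in Direction]
    + [("stop", None)]
)
```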
  • FIG. 4 is an optional flowchart of the animation video processing method provided by the embodiment of the present disclosure.
  • The steps shown in FIG. 4 can be executed by various electronic devices running the animation video processing device, such as various game devices with the animation video processing device.
  • Among them, a dedicated terminal with the animation video processing device can be packaged in the terminal 10-1 shown in FIG. 1 to execute the corresponding software modules in the animation video processing device shown in the preceding FIG. 2. The following describes the steps shown in FIG. 4.
  • Step 401 When the limbs of the target object are the lower left limb and the lower right limb of the target object, determine the speed of the left lower limb of the target object and the speed of the right lower limb of the target object.
  • Step 402 When the difference between the speed of the left lower limb of the target object and the speed of the right lower limb of the target object reaches a negative extreme value, it is determined that the position of the left lower limb of the target object is located in a corresponding horizontal plane.
  • Step 403 When the difference between the speed of the left lower extremity of the target object and the speed of the right lower extremity of the target object reaches a positive extreme value, it is determined that the position of the right lower extremity of the target object is located in a corresponding horizontal plane.
  • During movement, the left lower limb (left foot) and the right lower limb (right foot) land on the ground alternately.
  • The speed of the supporting foot drops to 0 after landing while the other foot has a positive speed, so when the two feet alternately become the supporting foot, the speed difference between the two changes between the negative extreme value and the positive extreme value.
  • Therefore, when the difference between the speed of the left foot of the target object and the speed of the right foot of the target object reaches the negative extreme value, the left foot of the target object has just touched the ground; the video frame in which the left foot of the target object touches the ground is thus a key video frame, as in the sketch below.
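  • A minimal sketch of this key-frame extraction, assuming per-frame foot speed samples are available as arrays (all names are illustrative):

```python
import numpy as np

def extract_key_frames(left_speed, right_speed):
    """Mark frames where the left/right foot speed difference reaches a
    local negative/positive extreme, i.e. a foot has just planted."""
    diff = np.asarray(left_speed, dtype=float) - np.asarray(right_speed, dtype=float)
    key_frames = []  # list of (frame_index, supporting_foot)
    for i in range(1, len(diff) - 1):
        if diff[i] < 0 and diff[i] <= diff[i - 1] and diff[i] <= diff[i + 1]:
            key_frames.append((i, "left"))    # negative extreme: left foot grounded
        elif diff[i] > 0 and diff[i] >= diff[i - 1] and diff[i] >= diff[i + 1]:
            key_frames.append((i, "right"))   # positive extreme: right foot grounded
    return key_frames
```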
  • Detecting the positions of the limbs of the target object in all video frames in the original animation video can also be achieved in the following manner:
  • when the limbs of the target object are the left upper limb and the right upper limb of the target object, determining the speed of the left upper limb of the target object and the speed of the right upper limb of the target object;
  • when the difference between the speed of the left upper limb of the target object and the speed of the right upper limb of the target object reaches a negative extreme value, determining that the position of the left upper limb of the target object is in contact with the corresponding reference object;
  • when the difference between the speed of the left upper limb of the target object and the speed of the right upper limb of the target object reaches a positive extreme value, determining that the position of the right upper limb of the target object is in contact with the corresponding reference object.
  • Taking rock climbing as an example, the left upper limb (left hand) and the right upper limb (right hand) alternately contact the rock serving as the reference object and provide support during the movement.
  • The supporting hand stays still while the other hand has a positive speed; therefore, when the two hands alternately become the support point, the speed difference between the two changes between the negative extreme value and the positive extreme value.
  • When the difference between the left hand speed of the target object and the right hand speed of the target object reaches the negative extreme value, the left hand of the target object has just come into contact with the rock as a support point; therefore, the video frame in which the left hand of the target object is in contact with the reference object is a key video frame.
  • Step 303 The animation video processing device determines a set of motion data matching the target object according to the motion data corresponding to the key video frame.
  • Step 304 The animation video processing device determines the displacement parameter of the target object based on the real-time motion state of the target object.
  • Determining the displacement parameter of the target object can be achieved in the following manner:
  • the planned movement path of the target object in the scene refers to the movement path planned by the automatic pathfinding algorithm according to the start position and the end position of the target object (both included).
  • The planned movement path is not necessarily the actual movement path, because when the target object actually moves in the scene, it may encounter obstacles (static obstacles such as walls, steps, and stones, or dynamic obstacles such as other objects in the scene and movable objects); these obstacles block the target object from moving along the planned movement path, and the target object continues to move toward the end position after bypassing the obstacle.
  • static obstacles will be avoided when generating the planned movement path, so there are no static obstacles on the planned movement path.
  • the start and end positions of the target object can be determined by the user or the server.
  • When the target object is an AI character in a game scene, the start and end positions of the target object and the planned movement path can be determined by the server.
  • The movement speed of the AI character is usually of the same order of magnitude as the movement speed of the game character controlled by the player, and the AI character does not teleport frequently, which means that the position of the AI character between two frames is close enough.
  • Taking a refresh rate of 60 frames per second and an AI character moving speed of 10 meters per second as an example, the position difference of the AI character between two frames is only about 0.16 m, which is much smaller than the scale of the entire scene; a per-frame update along the planned path is sketched below.
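  • A minimal sketch of deriving per-frame displacement along the planned polyline path under a maximum speed and a maximum acceleration (names and clamping details are illustrative assumptions, not the patent's implementation):

```python
import math

def step_along_path(pos, speed, waypoints, max_speed, max_accel, dt=1/60):
    """Advance one frame toward the next waypoint, clamping acceleration
    and speed; returns the new position and speed."""
    if not waypoints:
        return pos, 0.0
    tx, ty = waypoints[0]
    dx, dy = tx - pos[0], ty - pos[1]
    dist = math.hypot(dx, dy)
    speed = min(speed + max_accel * dt, max_speed)   # accelerate, clamped
    step = min(speed * dt, dist)                     # do not overshoot the waypoint
    if dist > 0:
        pos = (pos[0] + dx / dist * step, pos[1] + dy / dist * step)
    if dist <= step:
        waypoints.pop(0)                             # reached this waypoint
    return pos, speed
```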
  • Step 305 The animation video processing device obtains an animation video matching the real-time motion state of the target object through the displacement parameter of the target object based on the motion data set matching the target object.
  • the motion data set may include motion state data of the same AI character in different postures, and may also include motion data of the same AI character in the same motion posture in different virtual environments.
  • The terminal can control the AI character to move or perform certain actions on the land of the virtual interactive scene, and can also control the AI character to move or perform certain actions in virtual environments such as shallows, swamps, and mountain streams.
  • the movement of the AI character on the land can specifically be running, jumping, crawling, bending forward, etc. on the land.
  • Among them, the speed parameter of the target object in the virtual environment can also be determined as the motion data corresponding to the key video frame, to improve the user experience of controlling the AI character, as the target object, to move in different virtual environments.
  • Referring to FIG. 5, the steps shown in FIG. 5 can be executed by various electronic devices running the animation video processing device, for example, various game devices with the animation video processing device, where a dedicated terminal with the animation video processing device can be packaged in the terminal 10-1 shown in FIG. 1 to execute the corresponding software modules in the animation video processing device shown in the preceding FIG. 2.
  • The steps shown in FIG. 5 are described below.
  • Step 501 Determine a first motion vector corresponding to the current motion state of the target object based on the displacement parameter of the target object.
  • Step 502 Determine a second motion vector corresponding to each key video frame based on the motion data set matching the target object.
  • Step 503 According to the first motion vector, determine a second motion vector matching the first motion vector in the search binary tree structure corresponding to the second motion vector.
  • Step 504 Determine the corresponding key video frame according to the second motion vector matching the first motion vector, and obtain the animation video matching the real-time motion state of the target object through the determined key video frame .
  • The second motion vector matching the first motion vector can be determined in the following ways:
  • when the first motion vector characterizes that the position of the left lower limb of the target object is located in a corresponding horizontal plane, searching the search binary tree structure of the right lower limb corresponding to the second motion vectors to determine the second motion vector matching the first motion vector; or,
  • when the first motion vector characterizes that the position of the right lower limb of the target object is located in a corresponding horizontal plane, searching the search binary tree structure of the left lower limb corresponding to the second motion vectors to determine the second motion vector matching the first motion vector.
  • Obtaining an animation video matching the real-time motion state of the target object through the determined key video frames can be achieved in the following manner:
  • determining different animation videos to be output according to the key video frames, and determining, among the different animation videos to be output, the animation video to be output with the smallest distance between the limb position of the target object and the current limb position of the target object as the animation video matching the real-time motion state of the target object, as in the sketch below.
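  • A minimal sketch of this selection step, assuming the per-foot KD Trees and key-frame records come from the preprocessing described above (all names are illustrative, and a standard KD Tree library stands in for the patent's search binary tree):

```python
import numpy as np
from scipy.spatial import cKDTree

def match_animation(v_char, current_feet, supporting_foot, trees, key_frames, n=8):
    """trees: {'left': cKDTree, 'right': cKDTree} built over the six-dimensional
    key-frame vectors; key_frames: matching per-foot lists of records, each
    carrying a 'feet' array with the stored left/right foot positions."""
    # Alternate supporting feet: if the left foot just planted, search the
    # right-foot tree (and vice versa) to avoid landing on the same foot twice.
    side = "right" if supporting_foot == "left" else "left"
    _, idx = trees[side].query(np.asarray(v_char, dtype=float), k=n)
    candidates = [key_frames[side][i] for i in np.atleast_1d(idx)]
    # Final tie-break: the candidate whose stored foot positions are closest
    # to the character's current foot positions wins.
    return min(candidates,
               key=lambda kf: np.linalg.norm(np.asarray(kf["feet"]) - current_feet))
```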
  • In some embodiments, the animation video processing method further includes: obtaining a target resolution corresponding to the animation video output environment, and performing, based on the target resolution, resolution enhancement processing on the animation video matching the real-time motion state of the target object, so that the animation video matching the real-time motion state of the target object matches the animation video output environment.
  • Since the animation video output environments corresponding to the target object are not the same, performing resolution enhancement processing on the animation video matching the real-time motion state of the target object allows the user to watch a motion state of the AI character that is better suited to the user's usage.
  • The following takes a game scene with AI characters as an example to describe the animation video processing method provided by the embodiments of the present disclosure, where the terminal is provided with a client capable of displaying software with the corresponding AI character, such as clients or plug-ins of different games;
  • the user can obtain the AI character through the corresponding client, have it interact with the character controlled by the user and display it, and trigger the corresponding animation video processing process during virtual resource changes (for example, the virtual object can run or attack in the virtual environment);
  • the terminal connects to the server through a network, which can be a wide area network or a local area network, or a combination of the two, using wireless links to achieve data transmission.
  • FIG. 6 is a schematic diagram of a front-end display of the animation video processing method provided by an embodiment of the present disclosure
  • FIG. 7 is a schematic diagram of a front-end display of the animation video processing method provided by an embodiment of the disclosure.
  • FIGS. 6 and 7 respectively show the vivid behaviors of a single AI character when chasing. If an animation state machine were used to achieve this effect, a large number of state nodes would be required; in the traditional solution, the animation state machine has poor scalability when facing complex motion behaviors in the process of processing the AI character.
  • Further, the Motion Matching algorithm requires a large amount of motion capture data to be recorded as a data basis to ensure that, no matter what kind of motion state occurs, a relatively close animation clip can be found; this process takes up a lot of system overhead.
  • Specifically, the animation state machine needs to define a large number of state nodes in the state machine, and the state transition conditions between these nodes also become extremely complicated.
  • The entire state machine becomes a complex network structure composed of a large number of states and the transition conditions between them. This not only increases the system overhead at runtime, but also makes changes and the addition or deletion of states extremely difficult to handle, and the maintenance cost is very high.
  • Meanwhile, the Motion Matching algorithm requires a large amount of motion capture data to be recorded as a data basis to ensure that relatively close animation clips can be found regardless of the motion state. Therefore, the performance cost of selecting the best clip from the huge animation data at runtime is very large, which is not conducive to the large-scale use of AI characters and affects the user experience.
  • an embodiment of the present disclosure provides an animation video processing method, which includes the following steps:
  • Step 801 Preprocess the animation information, and extract key frames and corresponding motion data.
  • FIG. 10A is an optional flowchart of an animation video processing method provided by an embodiment of the present disclosure, including:
  • Step 1001 Determine the animation requirements of the game process.
  • Step 1002 Perform motion capture.
  • Step 1003 Import the motion capture result into the engine.
  • Step 1004 Calculate the speed and supporting feet of the target object.
  • Step 1005 Split the key frame.
  • Step 1006 Save the corresponding file.
  • Step 1007 Determine the moving state of the target object.
  • Step 1008 Predict the future speed of the target object.
  • Step 1009 Determine the actual animation video match.
  • Step 1010 Output the corresponding animation.
  • Among them, the data obtained by motion capture is divided into small segments and imported into the game engine; after pre-calculation, it is split into animation videos composed of smaller groups of video frames, the first frame of each such animation video is used as the key frame from which the movement state is extracted, and the KD Trees corresponding to the left and right feet are built.
  • At runtime, the algorithm finds, in the KD Tree, the animation videos composed of the N closest animation video frames according to the current character's motion state, that is, the six-dimensional vector composed of the current speed, the predicted future speed, and the past speed, and from them selects the animation whose feet are closest to the current character's feet as the final output. In this way, high-quality locomotion animation can be matched with extremely low computing overhead; among them, the computing overhead for 500 AI characters is only 1 ms.
  • FIG. 10B is an optional flowchart of the animation video processing method provided by an embodiment of the present disclosure
  • FIG. 11 is a schematic diagram of the display effect of the animation video processing method provided by the embodiment of the present disclosure.
  • The original Motion Matching algorithm uses huge motion capture data to cover the different motion behaviors of characters; for AI characters, the motion behavior is simpler and more controllable than that of user-controlled characters.
  • Specifically, the AI character moves according to the path pre-calculated by the pathfinding algorithm, and the motion trajectory generated by the pathfinding algorithm is generally a polyline segment rather than an arc, so the motion mode of the AI character is relatively simple and can be split into animation videos composed of several video frames. Therefore, for AI characters, especially when they exceed a certain number (which can be adjusted in time for different usage scenarios), there is no need to record huge and comprehensive motion capture data as for user characters; only key motion clips are needed.
  • the loop animation of walking and running in a straight line For example, the animation of starting walking and starting in eight directions (front, back, left, front, right, front, left, rear, right), turning animation in eight directions, emergency stop animation, etc.
  • These basic animation clips are sufficient to cover the motion state of AI when moving along the path generated by the pathfinding algorithm.
  • the aforementioned motion capture clips can be recorded as required and imported into the game engine, and the algorithm can preprocess these clips to extract key frames and their motion data.
  • the original Motion Matching algorithm samples all animation data at a high sampling frequency (a preferred value is ten times per second), cuts out key frames, and calculates the corresponding motion state data, so the number of key frames generated for dynamic matching is very large.
  • the animation video processing method provided in the present disclosure can instead generate key frames only when the left or right foot is on the ground. The number of key frames used for dynamic matching is thereby greatly reduced, which lowers the computational overhead at runtime. At the same time, since the current supporting foot is known, the range of the next match can be narrowed accordingly, which avoids the unnatural phenomenon of the same foot landing twice in a row.
  • each animation video composed of video frames is indexed by its starting key frame. The algorithm extracts the current character speed, the character speed a seconds later, the character speed b seconds earlier, and the current positions and speeds of the left and right feet as the key motion information, where a and b are configurable time parameters. This motion information is saved as a file for dynamic matching at runtime.
  • Step 802: Determine the real-time animation output according to the preprocessing result of the animation information and the motion state of the AI character.
  • FIG. 12 is a schematic diagram of an optional processing process of the animation video processing method provided by the embodiment of the present disclosure.
  • three key speeds can be selected as the main matching basis, namely the current speed, the future speed a seconds later, and the past speed b seconds earlier, while the character's vertical speed is ignored. Each speed can therefore be expressed as a two-dimensional vector: the current speed (V_cur_x, V_cur_y), the future speed (V_fur_x, V_fur_y), and the past speed (V_pre_x, V_pre_y).
  • the three speed vectors are combined into a single six-dimensional vector V_char = (V_cur_x, V_cur_y, V_fur_x, V_fur_y, V_pre_x, V_pre_y) that describes the motion state of the character.
  • each animation key frame is likewise precomputed to obtain the six-dimensional motion vector V_anim to which it belongs.
  • the algorithm needs to find, among all animation key frames, the several V_anim vectors closest to V_char as candidates for the final output animation. To simplify the calculation, the Euclidean distance can be used directly to measure the closeness between two six-dimensional vectors.
  • for precomputed animation data, the future speed can be derived from the subsequent animation frames, but for an AI character the future speed is unknown, so a prediction algorithm is needed. Since an AI character usually moves along a path calculated by the pathfinding algorithm, the future speed can be predicted from the path being followed.
  • the prediction algorithm uses the maximum speed V_max and the maximum acceleration A_max of the character's movement. As shown in FIG. 12, it is assumed that the character accelerates to V_max at the current position and holds that speed for a period of time. The time corresponding to each section (acceleration section, deceleration section, full-speed section) can be calculated. Based on this movement model, it can be determined which part of the acceleration-deceleration-reacceleration process the AI character is in a seconds later, and thus the predicted future speed a seconds later can be calculated.
  • after obtaining the N candidate animation videos closest to the current motion state, the algorithm selects, as the real-time animation output, the animation video whose foot positions are closest to the current character's foot positions.
  • Step 803: Determine the complete animation output according to the real-time animation output.
  • the original animation video matching the target object is determined, where the original animation video is used to represent the motion states of the target object in different usage scenarios;
  • the original animation video is preprocessed to obtain the key video frames in the original animation video and the motion data corresponding to the key video frames; a motion data set matching the target object is determined according to the motion data corresponding to the key video frames; a displacement parameter of the target object is determined based on the real-time motion state of the target object; and, based on the motion data set matching the target object, the displacement parameter of the target object is used to obtain
  • the animation video matching the real-time motion state of the target object. In this way, the animation video matching the real-time motion state of the target object can be obtained from the original animation video accurately and efficiently. Compared with the conventional technology, with the information processing capability of the user's electronic device unchanged, the number of supported AI characters and the corresponding animation quality are both greatly improved, effectively improving the user experience.
  • the embodiments of this application disclose an animation video processing method and apparatus, an electronic device, and a storage medium, which can determine an original animation video matching a target object; preprocess the original animation video to obtain the key video frames in the original animation video and the motion data corresponding to the key video frames; determine a motion data set matching the target object; determine a displacement parameter of the target object; and, based on the motion data set matching the target object,
  • obtain, through the displacement parameter of the target object, an animation video matching the real-time motion state of the target object.
  • the present disclosure can accurately and efficiently obtain the animation video matching the real-time motion state of the target object from the original animation video. Compared with the conventional technology, the number of supported AI characters and the corresponding animation quality are both greatly improved, effectively improving the user experience.
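To make the runtime matching described above concrete, the following is a minimal Python sketch. It is an illustrative reading of the description rather than the patented implementation; numpy, the helper names build_v_char, top_n_candidates, and select_best_candidate, and the array layouts are assumptions of ours.

```python
# Minimal sketch (assumptions: numpy; one V_anim row precomputed per key
# frame; foot positions stored per key frame as a (num_keyframes, 2, 3)
# array of left/right foot coordinates).
import numpy as np

def build_v_char(v_cur, v_fur, v_pre):
    """Concatenate the three 2D speeds into the 6D motion vector V_char."""
    return np.concatenate([v_cur, v_fur, v_pre])          # shape (6,)

def top_n_candidates(v_char, v_anims, n=10):
    """Indices of the N key frames whose V_anim is closest to V_char
    under the Euclidean distance."""
    dists = np.linalg.norm(v_anims - v_char, axis=1)      # (num_keyframes,)
    return np.argsort(dists)[:n]

def select_best_candidate(candidates, feet_pos, current_feet_pos):
    """Among the N candidates, pick the key frame whose foot positions
    are closest to the character's current foot positions."""
    d = np.linalg.norm(feet_pos[candidates] - current_feet_pos, axis=(1, 2))
    return candidates[int(np.argmin(d))]
```

A call such as select_best_candidate(top_n_candidates(build_v_char(v_cur, v_fur, v_pre), v_anims), feet_pos, current_feet_pos) then yields the key frame whose animation video is played as the final output.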

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An animation video processing method and apparatus, an electronic device, and a storage medium, including: determining an original animation video matching a target object; preprocessing the original animation video to obtain key video frames in the original animation video and motion data corresponding to the key video frames; determining a motion data set matching the target object; determining a displacement parameter of the target object; and obtaining, based on the motion data set matching the target object and through the displacement parameter of the target object, an animation video matching the real-time motion state of the target object. The present invention can obtain, from the original animation video, an animation video matching the real-time motion state of the target object, and can increase the number of supported AI characters and the corresponding animation quality while keeping the information processing capability of the user's electronic device unchanged.

Description

Animation video processing method and apparatus, electronic device, and storage medium
This application is based on and claims priority to Chinese Patent Application No. 202010085370.5, filed on February 10, 2020, the entire content of which is incorporated herein by reference.
Technical Field
The present disclosure relates to information processing technologies, and in particular, to an animation video processing method and apparatus, an electronic device, and a storage medium.
Background
An artificial intelligence (AI) character is a game character in a game scene that is controlled by a computer rather than by a user. For example, AI characters are common in massive multiplayer online role-playing games (MMORPGs).
Like a user-controlled game character, an AI character can move freely in the game scene, supports functions such as animation and sound effects, and its position can be accurately synchronized by the server to each client. In addition, the behavior logic of an AI character is controlled by a behavior tree, and AI designers can configure its behavior.
In conventional solutions, when AI characters are processed, the animation state machine scales poorly in the face of complex motion behaviors. Likewise, in order to cover the various motion behaviors of a character, the Motion Matching algorithm requires a large amount of motion capture data to be recorded as a data basis, so as to ensure that a relatively close animation clip can be found regardless of the motion state. This process takes up a large amount of system overhead and severely affects the user experience.
Summary
In view of this, the embodiments of the present disclosure provide an animation video processing method and apparatus, an electronic device, and a storage medium. The technical solutions of the embodiments of the present disclosure are implemented as follows:
The present disclosure provides an animation video processing method, the method including:
determining an original animation video matching a target object, the original animation video being used for representing motion states of the target object in different usage scenarios;
preprocessing the original animation video to obtain key video frames in the original animation video and motion data corresponding to the key video frames;
determining, according to the motion data corresponding to the key video frames, a motion data set matching the target object;
determining a displacement parameter of the target object based on a real-time motion state of the target object; and
obtaining, based on the motion data set matching the target object and through the displacement parameter of the target object, an animation video matching the real-time motion state of the target object.
An embodiment of the present disclosure further provides an animation video processing apparatus, the apparatus including:
an information transmission module, configured to determine an original animation video matching a target object, the original animation video being used for representing motion states of the target object in different usage scenarios; and
an information processing module, configured to preprocess the original animation video to obtain key video frames in the original animation video and motion data corresponding to the key video frames;
the information processing module being configured to determine, according to the motion data corresponding to the key video frames, a motion data set matching the target object;
the information processing module being configured to determine a displacement parameter of the target object based on a real-time motion state of the target object; and
the information processing module being configured to obtain, based on the motion data set matching the target object and through the displacement parameter of the target object, an animation video matching the real-time motion state of the target object.
In the foregoing solution,
the information processing module is configured to determine an animation video output environment corresponding to the target object;
the information processing module is configured to determine, according to the animation video output environment, the motion states of the target object in different usage scenarios; and
the information processing module is configured to perform motion capture on the motion actions of a captured subject according to the motion states of the target object in different usage scenarios, to form the original animation video matching the target object.
In the foregoing solution,
the information processing module is configured to detect limb landing positions of the target object in all video frames of the original animation video;
the information processing module is configured to determine, when a limb landing position of the target object is located in a corresponding horizontal plane, or
is in contact with a corresponding reference object, that a video frame including the limb landing position of the target object is a key video frame; and
the information processing module is configured to determine, based on the key video frames, displacement parameters of the target object in different usage scenarios as the motion data corresponding to the key video frames.
In the foregoing solution,
the information processing module is configured to determine, when the limbs of the target object are the left lower limb and the right lower limb of the target object, a left-lower-limb speed of the target object and a right-lower-limb speed of the target object;
the information processing module is configured to determine, when the difference between the left-lower-limb speed and the right-lower-limb speed of the target object reaches a negative extremum, that the left lower limb of the target object is located in the corresponding horizontal plane; and
the information processing module is configured to determine, when the difference between the left-lower-limb speed and the right-lower-limb speed of the target object reaches a positive extremum, that the right lower limb of the target object is located in the corresponding horizontal plane.
In the foregoing solution,
the information processing module is configured to determine, when the limbs of the target object are the left upper limb and the right upper limb of the target object, a left-upper-limb speed of the target object and a right-upper-limb speed of the target object;
the information processing module is configured to determine, when the difference between the left-upper-limb speed and the right-upper-limb speed of the target object reaches a negative extremum, that the left upper limb of the target object is in contact with the corresponding reference object; and
the information processing module is configured to determine, when the difference between the left-upper-limb speed and the right-upper-limb speed of the target object reaches a positive extremum, that the right upper limb of the target object is in contact with the corresponding reference object.
In the foregoing solution,
the information processing module is configured to determine a movement path of the target object based on a pathfinding algorithm process;
the information processing module is configured to determine, according to the motion data set matching the target object, a maximum displacement parameter and a corresponding maximum acceleration parameter matching the target object; and
the information processing module is configured to determine displacement parameters of the target object at different moments according to the movement path of the target object, the maximum displacement parameter matching the target object, and the corresponding maximum acceleration parameter.
In the foregoing solution,
the information processing module is configured to determine, based on the displacement parameter of the target object, a first motion vector corresponding to the current motion state of the target object;
the information processing module is configured to determine, based on the motion data set matching the target object, a second motion vector corresponding to each key video frame;
the information processing module is configured to determine, according to the first motion vector, a second motion vector matching the first motion vector in a binary search tree structure corresponding to the second motion vectors; and
the information processing module is configured to determine a corresponding key video frame according to the second motion vector matching the first motion vector, and obtain, through the determined key video frame, the animation video matching the real-time motion state of the target object.
In the foregoing solution,
the information processing module is configured to determine, when the first motion vector indicates that the left lower limb of the target object is located in the corresponding horizontal plane, the second motion vector matching the first motion vector through the right-lower-limb binary search tree structure corresponding to the second motion vectors; or,
the information processing module is configured to determine, when the first motion vector indicates that the right lower limb of the target object is located in the corresponding horizontal plane, the second motion vector matching the first motion vector through the left-lower-limb binary search tree structure corresponding to the second motion vectors.
In the foregoing solution,
the information processing module is configured to determine different to-be-output animation videos according to the key video frames; and
the information processing module is configured to determine, among the different to-be-output animation videos, the to-be-output animation video in which the distance between the limb landing positions of the target object and the current limb landing positions of the target object is smallest as the animation video matching the real-time motion state of the target object.
In the foregoing solution,
the information processing module is configured to obtain a target resolution corresponding to the animation video output environment; and
the information processing module is configured to perform, based on the target resolution, resolution enhancement processing on the animation video matching the real-time motion state of the target object, so that the animation video matching the real-time motion state of the target object matches the animation video output environment.
An embodiment of the present disclosure further provides an electronic device, the electronic device including:
a memory, configured to store executable instructions; and
a processor, configured to implement the foregoing animation video processing method when running the executable instructions stored in the memory.
An embodiment of the present disclosure further provides a computer-readable storage medium storing executable instructions, the executable instructions, when executed by a processor, implementing the foregoing animation video processing method.
The embodiments of the present disclosure have the following beneficial effects:
The technical solutions shown in the embodiments of the present disclosure determine an original animation video matching a target object, the original animation video being used for representing motion states of the target object in different usage scenarios; preprocess the original animation video to obtain key video frames in the original animation video and motion data corresponding to the key video frames; determine, according to the motion data corresponding to the key video frames, a motion data set matching the target object; determine a displacement parameter of the target object based on a real-time motion state of the target object; and obtain, based on the motion data set matching the target object and through the displacement parameter of the target object, an animation video matching the real-time motion state of the target object. In this way, the animation video matching the real-time motion state of the target object can be obtained from the original animation video accurately and efficiently. Compared with the conventional technology, with the information processing capability of the user's electronic device unchanged, the number of supported AI characters and the corresponding animation quality are both greatly improved, effectively improving the user experience.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a usage scenario of an animation video processing method according to an embodiment of the present disclosure;
FIG. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
FIG. 3 is an optional schematic flowchart of an animation video processing method according to an embodiment of the present disclosure;
FIG. 4 is an optional schematic flowchart of an animation video processing method according to an embodiment of the present disclosure;
FIG. 5 is an optional schematic flowchart of an animation video processing method according to an embodiment of the present disclosure;
FIG. 6 is a schematic front-end display diagram of an animation video processing method according to an embodiment of the present disclosure;
FIG. 7 is a schematic front-end display diagram of an animation video processing method according to an embodiment of the present disclosure;
FIG. 8 is an optional schematic flowchart of an animation video processing method according to an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of a display effect of an animation video processing method according to an embodiment of the present disclosure;
FIG. 10A is an optional schematic flowchart of an animation video processing method according to an embodiment of the present disclosure;
FIG. 10B is an optional schematic flowchart of an animation video processing method according to an embodiment of the present disclosure;
FIG. 11 is a schematic diagram of a display effect of an animation video processing method according to an embodiment of the present disclosure;
FIG. 12 is a schematic diagram of an optional processing process of an animation video processing method according to an embodiment of the present disclosure.
Detailed Description
To make the objectives, technical solutions, and advantages of the present disclosure clearer, the present disclosure is described in further detail below with reference to the accompanying drawings. The described embodiments are not to be considered as a limitation on the present disclosure, and all other embodiments obtained by a person of ordinary skill in the art without creative efforts shall fall within the protection scope of the present disclosure.
In the following description, "some embodiments" describes subsets of all possible embodiments. It may be understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Before the embodiments of the present disclosure are further described in detail, the nouns and terms involved in the embodiments of the present disclosure are explained. The nouns and terms involved in the embodiments of the present disclosure are subject to the following explanations.
1) "In response to": used for indicating a condition or state on which a performed operation depends. When the dependent condition or state is satisfied, the one or more performed operations may be in real time or may have a set delay; unless otherwise specified, there is no restriction on the execution order of the multiple operations performed.
2) Terminal: including but not limited to a common terminal and a dedicated terminal, where the common terminal maintains a long connection and/or a short connection with a sending channel, and the dedicated terminal maintains a long connection with the sending channel.
3) Client: a carrier that implements a specific function in a terminal. For example, a mobile client (APP) is a carrier of a specific function in a mobile terminal, such as a function of payment and consumption or a function of purchasing a financial product.
4) Virtual environment: a virtual environment displayed (or provided) when an application program runs on a terminal. The virtual environment may be a simulated environment of the real world, a semi-simulated and semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment. The following embodiments are described by using an example in which the virtual environment is a three-dimensional virtual environment, but this is not limited. Optionally, the virtual environment is further used for a virtual environment battle between at least two virtual objects. Optionally, the virtual environment is further used for a battle between at least two virtual objects using virtual firearms. Optionally, the virtual environment is further used for a battle between at least two virtual objects using virtual firearms within a target region, the target region continuously shrinking over time in the virtual environment.
5) Virtual props: virtual weapons that attack by firing bullets in the virtual environment, or virtual bows and virtual slingshots that fire arrow clusters. A virtual object can pick up a virtual firearm in the virtual environment and attack with the picked-up virtual firearm.
Optionally, the virtual object may be a user virtual object controlled through operations on a client, an artificial intelligence (AI) set in a virtual scene battle through training, or a non-user virtual object (non-player character, NPC) set in virtual scene interaction. Optionally, the virtual object may be a virtual character competing in the virtual scene. Optionally, the number of virtual objects participating in the interaction in the virtual scene may be preset, or may be dynamically determined according to the number of clients joining the interaction.
Taking a shooting game as an example, the user may control a virtual object to fall freely in the sky of the virtual scene, glide, or open a parachute to fall; to run, jump, crawl, or walk bent forward on land; or to swim, float, or dive in the ocean. The user may also control the virtual object to ride a virtual vehicle to move in the virtual scene; for example, the virtual vehicle may be a virtual car, a virtual aircraft, or a virtual yacht. The foregoing scenarios are merely examples, and the embodiments of the present disclosure do not specifically limit this. The user may also control the virtual object to interact with other virtual objects in manners such as fighting with a virtual weapon; the virtual weapon may be a cold weapon or a hot weapon, and the present disclosure does not specifically limit the type of the virtual weapon.
6) AI character: a non-player character in a game, such as an enemy (which may be machine-controlled) or a character controlled by a teammate in the game.
7) Animation state machine: a technical means of driving character animation performance with different states and the transitions between them.
8) Locomotion: basic motion behaviors of a game target object (game character), including walking, running, turning, and the like.
9) Motion capture technology: recording the motion state of a real actor through sensors and converting it into animation data.
10) Motion Matching: a technique of driving character animation with a large amount of motion capture data.
11) Sync Point technology: marking sync points on animations (generally the moments when the left/right foot lands) to ensure that the positions of the two feet roughly coincide when switching between two animations.
12) KD Tree (k-dimensional tree): a binary tree-like data structure that can be used to quickly find the nearest neighbor of a specified coordinate among a large amount of data.
13) Euclidean distance: a distance measure between two coordinates, calculated by squaring the coordinate difference in each spatial dimension, summing, and then taking the square root; in three-dimensional space it corresponds to the physical distance between two points (a worked formula follows this list).
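As a worked formula for item 13) above (the notation is ours, not from the source), the Euclidean distance between two n-dimensional points p and q is:

$$d(p, q) = \sqrt{\sum_{i=1}^{n} \left(p_i - q_i\right)^2}$$

For the six-dimensional motion vectors V_char and V_anim used later in this disclosure, n = 6.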
The method provided in the present disclosure may be applied to virtual reality applications, three-dimensional map programs, military simulation programs, first-person shooting games (FPS), multiplayer online battle arena games (MOBA), and the like. The following embodiments are described by using an example of application in a game.
A game based on a virtual environment is often composed of maps of one or more game worlds. The virtual environment in the game simulates scenes of the real world. The user can manipulate a virtual object in the game to walk, run, jump, shoot, fight, drive, switch virtual weapons, attack other virtual objects with a virtual weapon, and perform other actions in the virtual environment, with strong interactivity, and multiple users can form teams online for competitive games. When the user controls a virtual object to attack a target virtual object with a virtual weapon, the user needs to move (for example, run or climb) according to the position of the target virtual object; AI characters in the game likewise need to move in the game interface.
A first-person shooting game (FPS) is a shooting game that a user can play from a first-person perspective; the picture of the virtual environment in the game is a picture of the virtual environment observed from the perspective of a first virtual object. In the game, at least two virtual objects play in a single-round battle mode in the virtual environment. A virtual object survives in the virtual environment by avoiding attacks launched by other virtual objects and dangers in the virtual environment (such as a gas circle or a swamp). When the hit points of a virtual object in the virtual environment reach zero, the life of the virtual object in the virtual environment ends, and the virtual object that survives last in the virtual environment is the winner. Optionally, the battle takes the moment when the first client joins the battle as the start moment and the moment when the last client exits the battle as the end moment, and each client may control one or more virtual objects in the virtual environment. Optionally, the competitive modes of the battle may include a single-player battle mode, a two-player team battle mode, or a multi-player large-team battle mode, and the battle mode is not limited in the embodiments of the present disclosure.
FIG. 1 is a schematic diagram of a usage scenario of an animation video processing method according to an embodiment of the present disclosure. Referring to FIG. 1, a terminal (including a terminal 10-1 and a terminal 10-2) is provided with a client of software capable of displaying corresponding animation video processing, such as clients or plug-ins of different games. The user can obtain and display animation video processing through the corresponding client and trigger a corresponding animation video processing process during the running of the game process (for example, running or climbing along different movement routes). The terminal is connected to a server 200 through a network 300. The network 300 may be a wide area network or a local area network, or a combination of the two, and uses a wireless link to implement data transmission.
As an example, the server 200 is configured to deploy the animation video processing apparatus to implement the animation video processing method provided in the present disclosure, so as to determine an original animation video matching a target object, the original animation video being used for representing motion states of the target object in different usage scenarios; preprocess the original animation video to obtain key video frames in the original animation video and motion data corresponding to the key video frames; determine, according to the motion data corresponding to the key video frames, a motion data set matching the target object; determine a displacement parameter of the target object based on a real-time motion state of the target object; and obtain, based on the motion data set matching the target object and through the displacement parameter of the target object, an animation video matching the real-time motion state of the target object.
Certainly, the animation video processing apparatus provided in the present disclosure may be applied to different game environments, including but not limited to virtual reality applications, three-dimensional map programs, military simulation programs, first-person shooting games (FPS), multiplayer online battle arena games (MOBA), and the like, and finally presents and controls the corresponding virtual props on a user interface (UI). The motion data of the user in the current display interface (for example, running or attacking in the virtual environment) may also be called by other application programs.
When different animation videos are processed by the animation video processing apparatus, obtaining the animation video matching the real-time motion state of the target object specifically includes: determining an original animation video matching the target object, the original animation video being used for representing motion states of the target object in different usage scenarios; preprocessing the original animation video to obtain key video frames in the original animation video and motion data corresponding to the key video frames; determining, according to the motion data corresponding to the key video frames, a motion data set matching the target object; determining a displacement parameter of the target object based on a real-time motion state of the target object; and obtaining, based on the motion data set matching the target object and through the displacement parameter of the target object, an animation video matching the real-time motion state of the target object.
The structure of the animation video processing apparatus of the embodiments of the present disclosure is described in detail below. The animation video processing apparatus may be implemented in various forms, such as a dedicated terminal with an animation video processing function, or an electronic device (a mobile phone or a tablet computer) provided with an animation video processing function, for example, the terminal 10-1 or the terminal 10-2 in FIG. 1. FIG. 2 is a schematic diagram of the composition structure of an electronic device according to an embodiment of the present disclosure. It may be understood that FIG. 2 shows only an exemplary structure rather than the entire structure of the animation video processing apparatus, and part or all of the structure shown in FIG. 2 may be implemented as required.
The animation video processing apparatus provided in this embodiment of the present disclosure includes: at least one processor 201, a memory 202, a user interface 203, and at least one network interface 204. The components in the animation video processing apparatus are coupled together through a bus system 205. It may be understood that the bus system 205 is configured to implement connection and communication between these components. In addition to a data bus, the bus system 205 further includes a power bus, a control bus, and a status signal bus. However, for clarity of description, all buses are marked as the bus system 205 in FIG. 2.
The user interface 203 may include a display, a keyboard, a mouse, a trackball, a click wheel, keys, buttons, a touch panel, a touchscreen, or the like.
It may be understood that the memory 202 may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories. The memory 202 in this embodiment of the present disclosure can store data to support the operation of the terminal (such as 10-1). Examples of the data include any computer program to be operated on the terminal (such as 10-1), for example, an operating system and application programs. The operating system includes various system programs, such as a framework layer, a core library layer, and a driver layer, for implementing various basic services and processing hardware-based tasks. The application programs may include various application programs.
In some embodiments, the animation video processing apparatus provided in the embodiments of the present disclosure may be implemented by a combination of software and hardware. As an example, the animation video processing apparatus provided in the embodiments of the present disclosure may be a processor in the form of a hardware decoding processor that is programmed to perform the animation video processing method provided in the embodiments of the present disclosure. For example, the processor in the form of a hardware decoding processor may use one or more application-specific integrated circuits (ASICs), DSPs, programmable logic devices (PLDs), complex programmable logic devices (CPLDs), field-programmable gate arrays (FPGAs), or other electronic elements.
As an example of implementing the animation video processing apparatus provided in the embodiments of the present disclosure by a combination of software and hardware, the animation video processing apparatus provided in the embodiments of the present disclosure may be directly embodied as a combination of software modules executed by the processor 201. The software modules may be located in a storage medium, the storage medium is located in the memory 202, and the processor 201 reads the executable instructions included in the software modules in the memory 202 and completes, in combination with necessary hardware (for example, including the processor 201 and other components connected to the bus 205), the animation video processing method provided in the embodiments of the present disclosure.
As an example, the processor 201 may be an integrated circuit chip with signal processing capability, such as a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, where the general-purpose processor may be a microprocessor, any conventional processor, or the like.
As an example of implementing the animation video processing apparatus provided in the embodiments of the present disclosure by hardware, the apparatus provided in the embodiments of the present disclosure may be directly executed by the processor 201 in the form of a hardware decoding processor, for example, executed by one or more application-specific integrated circuits (ASICs), DSPs, programmable logic devices (PLDs), complex programmable logic devices (CPLDs), field-programmable gate arrays (FPGAs), or other electronic elements to implement the animation video processing method provided in the embodiments of the present disclosure.
The memory 202 in this embodiment of the present disclosure is configured to store various types of data to support the operation of the animation video processing apparatus. Examples of the data include any executable instructions to be operated on the animation video processing apparatus, and a program implementing the animation video processing method of the embodiments of the present disclosure may be included in the executable instructions.
In some other embodiments, the animation video processing apparatus provided in the embodiments of the present disclosure may be implemented by software. FIG. 2 shows the animation video processing apparatus stored in the memory 202, which may be software in the form of programs, plug-ins, and the like, and includes a series of modules. As an example of the programs stored in the memory 202, the animation video processing apparatus may be included, and the animation video processing apparatus includes the following software modules: an information transmission module 2081 and an information processing module 2082. When the software modules in the animation video processing apparatus are read by the processor 201 into a RAM and executed, the animation video processing method provided in the embodiments of the present disclosure is implemented. The functions of the software modules in the animation video processing apparatus include:
the information transmission module 2081, configured to determine an original animation video matching a target object, the original animation video being used for representing motion states of the target object in different usage scenarios;
the information processing module 2082, configured to preprocess the original animation video to obtain key video frames in the original animation video and motion data corresponding to the key video frames;
the information processing module 2082, configured to determine, according to the motion data corresponding to the key video frames, a motion data set matching the target object;
the information processing module 2082, configured to determine a displacement parameter of the target object based on a real-time motion state of the target object; and
the information processing module 2082, configured to obtain, based on the motion data set matching the target object and through the displacement parameter of the target object, an animation video matching the real-time motion state of the target object.
The animation video processing method provided in the embodiments of the present disclosure is described with reference to the animation video processing apparatus shown in FIG. 2. Referring to FIG. 3, FIG. 3 is an optional schematic flowchart of an animation video processing method according to an embodiment of the present disclosure. It may be understood that the steps shown in FIG. 3 may be performed by various electronic devices running the animation video processing apparatus, for example, various game devices with an animation video processing apparatus, where a dedicated terminal with an animation video processing apparatus may be packaged in the terminal 10-1 shown in FIG. 1 to execute the corresponding software modules in the animation video processing apparatus shown in FIG. 2. The steps shown in FIG. 3 are described below.
Step 301: The animation video processing apparatus determines an original animation video matching a target object.
The original animation video is used for representing motion states of the target object in different usage scenarios. The different usage scenarios involved in the embodiments of the present disclosure include but are not limited to: 2D video game scenarios, 3D motion-sensing game scenarios, and virtual reality interaction scenarios.
In some embodiments of the present disclosure, the target object may be a movable object in different usage scenarios. The scenario may be a 2D usage scenario or a 3D usage scenario. Taking a game scene as an example, the game scene is a virtual scene created during a game match for game characters to compete in, such as a virtual house, a virtual island, or a virtual map. The target object may be a game character in the game scene, such as a game character controlled by a player or an AI character controlled by a computer. In some other examples, the target object may also be a movable object other than a game character in the game scene, such as a monster, a vehicle, a ship, a flying object, or any other movable object.
In some embodiments of the present disclosure, determining an original animation video matching a target object may be implemented in the following manner:
determining an animation video output environment corresponding to the target object; determining, according to the animation video output environment, the motion states of the target object in different usage scenarios; and performing motion capture on the motion actions of a captured subject according to the motion states of the target object in different usage scenarios, to form the original animation video matching the target object. As shown in FIG. 1, a terminal (including the terminal 10-1 and the terminal 10-2) is provided with a client of software capable of displaying a corresponding AI character, such as clients or plug-ins of different games. Through the corresponding client, the user can obtain the AI character, have it interact with the user-controlled character, display it, and trigger a corresponding animation video processing process during the change of virtual resources (for example, a virtual object can run or attack in the virtual environment). Covering the different motion behaviors of a character with motion capture data to form the original animation video matching the target object can ensure the completeness of the coverage of the different motion behaviors of the character.
Step 302: The animation video processing apparatus preprocesses the original animation video to obtain key video frames in the original animation video and motion data corresponding to the key video frames.
In some embodiments of the present disclosure, preprocessing the original animation video to obtain the key video frames in the original animation video and the motion data corresponding to the key video frames may be implemented in the following manner:
detecting limb landing positions of the target object in all video frames of the original animation video; when a limb landing position of the target object is located in a corresponding horizontal plane, or when a limb landing position of the target object is in contact with a corresponding reference object, determining that a video frame including the limb landing position of the target object is a key video frame; and determining, based on the key video frames, displacement parameters of the target object in different usage scenarios as the motion data corresponding to the key video frames. Taking a game scene as an example, an AI character moves along a path precomputed by a pathfinding algorithm, and the motion trajectory generated by the pathfinding algorithm is generally a polyline rather than an arc. The motion pattern of the AI character is relatively simple and can be split into motion animation videos in different directions; for example, start-walking and start-running animation videos in eight directions (front, back, left, right, front-left, front-right, back-left, back-right), turning animation videos in eight directions while running, and emergency stop animation videos can cover all motion states of the AI character.
The animation video processing method provided in the embodiments of the present disclosure is further described with reference to the animation video processing apparatus shown in FIG. 2. Referring to FIG. 4, FIG. 4 is an optional schematic flowchart of an animation video processing method according to an embodiment of the present disclosure. It may be understood that the steps shown in FIG. 4 may be performed by various electronic devices running the animation video processing apparatus, for example, various game devices with an animation video processing apparatus, where a dedicated terminal with an animation video processing apparatus may be packaged in the terminal 10-1 shown in FIG. 1 to execute the corresponding software modules in the animation video processing apparatus shown in FIG. 2. The steps shown in FIG. 4 are described below.
Step 401: When the limbs of the target object are the left lower limb and the right lower limb of the target object, determine a left-lower-limb speed of the target object and a right-lower-limb speed of the target object.
Step 402: When the difference between the left-lower-limb speed of the target object and the right-lower-limb speed of the target object reaches a negative extremum, determine that the left lower limb of the target object is located in the corresponding horizontal plane.
Step 403: When the difference between the left-lower-limb speed of the target object and the right-lower-limb speed of the target object reaches a positive extremum, determine that the right lower limb of the target object is located in the corresponding horizontal plane.
Taking the running of an AI character in a game environment as an example, during running, the left lower limb (left foot) and the right lower limb (right foot) land alternately. During the motion, the speed of the supporting foot drops to 0 after landing while the other foot has a positive speed. Therefore, when the two feet alternately become the supporting foot, the speed difference between them oscillates between a negative extremum and a positive extremum. When the difference between the left-foot speed and the right-foot speed of the target object reaches a negative extremum, the left foot of the target object has landed; therefore, the video frame in which the left foot of the target object has landed is a key video frame.
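The detection just described can be sketched minimally in Python, assuming per-frame scalar foot speeds have already been sampled from the animation clip; the function name and the scipy-based extremum search are illustrative assumptions, not the source's implementation.

```python
# Minimal sketch (assumptions: per-frame left/right foot speeds of one
# clip; scipy available). Key frames are taken at the extrema of the
# foot speed difference V_rel = V_l - V_r.
import numpy as np
from scipy.signal import argrelextrema

def find_foot_plant_keyframes(v_left, v_right, order=3):
    """Return (left_plant_frames, right_plant_frames) for one clip."""
    v_rel = np.asarray(v_left) - np.asarray(v_right)
    # Negative extrema of V_rel: the left foot is planted (its speed is
    # near zero while the right foot is at its fastest in the cycle).
    left_plants = argrelextrema(v_rel, np.less, order=order)[0]
    # Positive extrema of V_rel: the right foot is planted.
    right_plants = argrelextrema(v_rel, np.greater, order=order)[0]
    return left_plants, right_plants
```

In practice the speeds would come from sampling the engine's skeletal poses; the order parameter controls how many neighboring frames an extremum must dominate, which suppresses jitter.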
In some embodiments of the present disclosure, detecting the limb landing positions of the target object in all video frames of the original animation video may be implemented in the following manner:
when the limbs of the target object are the left upper limb and the right upper limb of the target object, determining a left-upper-limb speed of the target object and a right-upper-limb speed of the target object; when the difference between the left-upper-limb speed of the target object and the right-upper-limb speed of the target object reaches a negative extremum, determining that the left upper limb of the target object is in contact with the corresponding reference object; and when the difference between the left-upper-limb speed of the target object and the right-upper-limb speed of the target object reaches a positive extremum, determining that the right upper limb of the target object is in contact with the corresponding reference object.
Taking the rock climbing of an AI character in a game environment as an example, during climbing, the left upper limb (left hand) and the right upper limb (right hand) alternately come into contact with the rock serving as the reference object. During the motion, the supporting hand remains still while the other hand has a positive speed. Therefore, when the two hands alternately become the supporting point, the speed difference between them oscillates between a negative extremum and a positive extremum. When the difference between the left-hand speed and the right-hand speed of the target object reaches a negative extremum, the left hand of the target object is already in contact with the rock as the supporting point; therefore, the video frame in which the left hand of the target object is in contact with the reference object is a key video frame.
Step 303: The animation video processing apparatus determines, according to the motion data corresponding to the key video frames, a motion data set matching the target object.
Step 304: The animation video processing apparatus determines a displacement parameter of the target object based on a real-time motion state of the target object.
In some embodiments of the present disclosure, determining the displacement parameter of the target object based on the real-time motion state of the target object may be implemented in the following manner:
determining a movement path of the target object based on a pathfinding algorithm process; determining, according to the motion data set matching the target object, a maximum displacement parameter and a corresponding maximum acceleration parameter matching the target object; and determining displacement parameters of the target object at different moments according to the movement path of the target object, the maximum displacement parameter matching the target object, and the corresponding maximum acceleration parameter.
In some embodiments of the present disclosure, the planned movement path of the target object in the scene is a movement path planned by an automatic pathfinding algorithm according to the start and end positions of the target object (including a start position and an end position). It should be noted that the planned movement path is not necessarily the actual movement path, because when the target object actually moves in the scene, it may encounter obstacles (static obstacles such as walls, steps, and stones, and dynamic obstacles such as other objects and movable objects in the scene). These obstacles block the target object from moving along the planned movement path, and the target object continues to move toward the end position after bypassing them. In addition, static obstacles are avoided when the planned movement path is generated, so there are no static obstacles on the planned movement path; however, when the target object actually moves, due to collision and squeezing with other objects, the target object sometimes deviates slightly from the planned movement path, so collisions with static obstacles in the scene may also occur. The start and end positions of the target object may be decided by the user or by the server. For example, when the target object is an AI character in a game scene, both the start and end positions and the planned movement path of the target object may be decided by the server. Taking the target object being an AI character in a game as an example, the movement speed of the AI character is usually of the same order of magnitude as that of a game character controlled by a player, and it does not teleport frequently, which means that the positions of the AI character in two adjacent frames are sufficiently close. Taking a refresh rate of 60 frames per second and an AI character movement speed of 10 meters per second as an example, the position difference of the AI character between two frames is only about 0.16 m, which is far smaller than the scale of the entire scene.
Step 305: The animation video processing apparatus obtains, based on the motion data set matching the target object and through the displacement parameter of the target object, an animation video matching the real-time motion state of the target object.
In some embodiments of the present disclosure, the motion data set may include motion state data of the same AI character in different postures, or may include motion data of the same AI character in the same motion posture in different virtual environments. With reference to the embodiment shown in FIG. 2, taking a first-person shooting (FPS) 3D game as an example, the terminal may control the AI character to move or perform certain actions on the land of the virtual interaction scene, and certainly may also control the AI character to move or perform certain actions in virtual environments such as shoals, swamps, and mountain streams. The movement of the AI character on land may specifically be running, jumping, crawling, walking bent forward, and the like. Since the motion data of an AI character in the same motion posture differs across virtual environments (for example, the running speeds of the same AI character on land, shoals, swamps, and mountain streams all differ), the speed parameter of the target object in the virtual environment may also be determined as the motion data corresponding to the key video frames, so as to improve the user experience of controlling the AI character, as the target object, moving in different virtual environments.
The animation video processing method provided in the embodiments of the present disclosure is further described with reference to the animation video processing apparatus shown in FIG. 2. Referring to FIG. 5, FIG. 5 is an optional schematic flowchart of an animation video processing method according to an embodiment of the present disclosure. It may be understood that the steps shown in FIG. 5 may be performed by various electronic devices running the animation video processing apparatus, for example, various game devices with an animation video processing apparatus, where a dedicated terminal with an animation video processing apparatus may be packaged in the terminal 10-1 shown in FIG. 1 to execute the corresponding software modules in the animation video processing apparatus shown in FIG. 2. The steps shown in FIG. 5 are described below.
Step 501: Determine, based on the displacement parameter of the target object, a first motion vector corresponding to the current motion state of the target object.
Step 502: Determine, based on the motion data set matching the target object, a second motion vector corresponding to each key video frame.
Step 503: Determine, according to the first motion vector, a second motion vector matching the first motion vector in a binary search tree structure corresponding to the second motion vectors.
Step 504: Determine a corresponding key video frame according to the second motion vector matching the first motion vector, and obtain, through the determined key video frame, an animation video matching the real-time motion state of the target object.
In some embodiments of the present disclosure, determining, according to the first motion vector, the second motion vector matching the first motion vector in the binary search tree structure corresponding to the second motion vectors may be implemented in the following manner:
when the first motion vector indicates that the left lower limb of the target object is located in the corresponding horizontal plane, determining, through the right-lower-limb binary search tree structure corresponding to the second motion vectors, the second motion vector matching the first motion vector; or,
when the first motion vector indicates that the right lower limb of the target object is located in the corresponding horizontal plane, determining, through the left-lower-limb binary search tree structure corresponding to the second motion vectors, the second motion vector matching the first motion vector.
In some embodiments of the present disclosure, obtaining, through the determined key video frame, the animation video matching the real-time motion state of the target object may be implemented in the following manner:
determining different to-be-output animation videos according to the key video frames; and determining, among the different to-be-output animation videos, the to-be-output animation video in which the distance between the limb landing positions of the target object and the current limb landing positions of the target object is smallest as the animation video matching the real-time motion state of the target object.
In some embodiments of the present disclosure, the animation video processing method further includes:
obtaining a target resolution corresponding to the animation video output environment; and performing, based on the target resolution, resolution enhancement processing on the animation video matching the real-time motion state of the target object, so that the animation video matching the real-time motion state of the target object matches the animation video output environment. Since the animation video output environments corresponding to the target object differ, performing resolution enhancement processing on the animation video matching the real-time motion state of the target object allows the user to watch an animation better suited to the motion state of the AI character, improving the user experience.
The animation video processing method provided in the embodiments of the present disclosure is described below by taking a game scene with AI characters as an example. A client of software capable of displaying the corresponding AI character is provided, such as clients or plug-ins of different games. Through the corresponding client, the user can obtain the AI character, have it interact with the user-controlled character, display it, and trigger a corresponding animation video processing process during the change of virtual resources (for example, a virtual object can run or attack in the virtual environment). The terminal is connected to the server through a network, and the network may be a wide area network or a local area network, or a combination of the two, and uses a wireless link to implement data transmission.
Referring to FIG. 6 and FIG. 7, FIG. 6 is a schematic front-end display diagram of an animation video processing method according to an embodiment of the present disclosure, and FIG. 7 is a schematic front-end display diagram of an animation video processing method according to an embodiment of the present disclosure. FIG. 6 and FIG. 7 respectively show the vivid behavior of a single AI character during a chase. Implementing this effect with an animation state machine would require a large number of state nodes. In conventional solutions, when AI characters are processed, the animation state machine scales poorly in the face of complex motion behaviors. Likewise, in order to cover the various motion behaviors of a character, the Motion Matching algorithm requires a large amount of motion capture data to be recorded as a data basis, so as to ensure that a relatively close animation clip can be found regardless of the motion state; this process takes up a large amount of system overhead.
Specifically, in order to express complex and highly realistic character animation, an animation state machine needs to define a large number of state nodes in the state machine, and the state transition conditions between these nodes become extremely complicated accordingly; the entire state machine becomes a complex network structure composed of a large number of states and the transition conditions between them. This not only increases the runtime system overhead, but also makes modifications and the addition or deletion of states extremely difficult, and the maintenance cost is very high. Similarly, in order to cover the various motion behaviors of a character, the Motion Matching algorithm requires a large amount of motion capture data to be recorded as a data basis, so as to ensure that a relatively close animation clip can be found regardless of the motion state. Therefore, the performance cost of the computation process of selecting the best clip from the huge amount of animation data at runtime is very large, which is not conducive to the large-scale use of AI characters and affects the user experience.
Referring to FIG. 8 and FIG. 9, FIG. 8 is an optional schematic flowchart of an animation video processing method according to an embodiment of the present disclosure, and FIG. 9 is a schematic diagram of a display effect of the animation video processing method according to an embodiment of the present disclosure. Specifically, to solve the foregoing problems, an embodiment of the present disclosure provides an animation video processing method, including the following steps:
Step 801: Preprocess the animation information, and extract key frames and corresponding motion data.
Referring to FIG. 10A, FIG. 10A is an optional schematic flowchart of an animation video processing method according to an embodiment of the present disclosure, including:
Step 1001: Determine the animation requirements of the game process.
Step 1002: Perform motion capture.
Step 1003: Import the motion capture results into the engine.
Step 1004: Calculate the speed and supporting foot of the target object.
Step 1005: Split the key frames.
Step 1006: Save the corresponding files.
Step 1007: Determine the movement state of the target object.
Step 1008: Predict the future speed of the target object.
Step 1009: Determine the actual animation video match.
Step 1010: Output the corresponding animation.
After the data obtained by motion capture is divided into small segments and imported into the game engine, it is split, through precomputation, into animation videos composed of smaller video frames. The first frame of each such animation video is used as a key frame from which the corresponding motion state is extracted, and KD Trees corresponding to the left and right feet are built from these key frames. At runtime, the algorithm searches the KD Tree, according to the current character's motion state, that is, the six-dimensional vector composed of the current speed, the predicted future speed, and the past speed, for the N closest animation videos, and selects from them the animation whose feet are closest to the current character's feet as the final output. High-quality locomotion animations matching the system can thus be produced with extremely low computing overhead; the computing overhead for 500 AI characters is only about 1 ms.
Referring to FIG. 10B and FIG. 11, FIG. 10B is an optional schematic flowchart of an animation video processing method according to an embodiment of the present disclosure, and FIG. 11 is a schematic diagram of a display effect of the animation video processing method according to an embodiment of the present disclosure. Specifically, the original Motion Matching algorithm covers the different motion behaviors of a character with huge amounts of motion capture data. For an AI character, however, motion behavior is simpler and more controllable than that of a user-controlled character: a user-controlled character receives user control input and may trace complex motion trajectories, whereas an AI character moves along a path precomputed by a pathfinding algorithm, and the motion trajectory generated by a pathfinding algorithm is generally a polyline rather than an arc, so the motion pattern of an AI character is relatively simple and can be split into animation videos composed of several video frames. Therefore, for AI characters, and especially when their number exceeds a certain threshold (the number can be adjusted for different usage scenarios), there is no need to record huge and comprehensive motion capture data as for user characters; only the key motion clips need to be recorded, for example, the loop animations of walking and running in a straight line, the start-walking and start-running animations in eight directions (front, back, left, right, front-left, front-right, back-left, back-right), the turning animations in eight directions while running, the emergency stop animations, and so on. These basic animation clips are sufficient to cover the motion states of an AI character moving along a path generated by the pathfinding algorithm.
After the foregoing motion capture clips are recorded as required and imported into the game engine, the algorithm can preprocess these clips to extract key frames and their motion data. The original Motion Matching algorithm samples all animation data at a high sampling frequency (a preferred value is ten times per second), cuts out key frames, and calculates the corresponding motion state data, so the number of key frames generated for dynamic matching is very large.
Further, in combination with the Sync Point mechanism in the animation state machine, the animation video processing method provided in the present disclosure can generate key frames only when the left or right foot lands. The number of key frames used for dynamic matching is thereby greatly reduced, which lowers the runtime computational overhead. At the same time, since the current supporting foot is known, the range of the next match can be narrowed accordingly, which avoids the unnatural phenomenon of the same foot landing twice in a row.
Key frames are computed with reference to the moments at which the supporting foot is firmly planted on the ground; specifically, they are determined by the speed difference between the two feet. Define the speeds of the left and right feet as V_l and V_r respectively; then the speed difference is V_rel = V_l - V_r. During the motion of a game character, the speed of the supporting foot drops to 0 after landing while the other foot has a positive speed. Therefore, when the two feet alternately become the supporting foot, the speed difference oscillates back and forth between a negative extremum and a positive extremum, as shown in FIG. 10B. Evidently, when the speed difference reaches its negative extremum, the left foot is on the ground with zero speed while the right foot reaches its maximum speed within the cycle, so this point can be defined as the left-foot plant point; similarly, the positive extremum of the speed difference marks the right-foot plant point. Therefore, by computing the speed difference between the two feet, the time points at which each foot is planted, that is, the time points of the key frames, can be quickly determined. These time points divide an animation clip into several animation videos composed of smaller video frames. Each such animation video is indexed by its starting key frame, and the algorithm extracts its current character speed, the character speed a seconds later, the character speed b seconds earlier, and the current positions and speeds of the left and right feet as the key motion information, where a and b are configurable time parameters. This motion information is saved as a file for dynamic matching at runtime (a sketch of this extraction follows).
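Below is a minimal sketch of extracting the per-key-frame motion information described above, assuming a fixed frame rate and precomputed per-frame character and foot data; the record layout and the function name extract_keyframe_features are illustrative assumptions.

```python
# Minimal sketch (assumptions: fixed fps; per-frame 2D character
# velocities and per-frame foot positions/velocities are available).
import numpy as np

def extract_keyframe_features(keyframes, char_vel, foot_pos, foot_vel,
                              fps=30, a=0.3, b=0.2):
    """For each key frame k, record V_cur, V_fur (a seconds later),
    V_pre (b seconds earlier), and the current foot data."""
    ahead = int(round(a * fps))
    behind = int(round(b * fps))
    n = len(char_vel)
    features = []
    for k in keyframes:
        features.append({
            "frame": int(k),
            "v_cur": char_vel[k],
            "v_fur": char_vel[min(k + ahead, n - 1)],  # clamp at clip end
            "v_pre": char_vel[max(k - behind, 0)],     # clamp at clip start
            "feet_pos": foot_pos[k],                   # left/right positions
            "feet_vel": foot_vel[k],
        })
    return features
```

These records correspond to the per-key-frame file mentioned above; the on-disk format (for example, JSON or a binary blob) is not specified by the source.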
In some embodiments of the present disclosure, custom parameters may also be saved in the file for timely adaptation. For example, for each animation clip imported after motion capture, the start and end times usable for key frame generation can be specified so that distracting animation frames are kept out of the matching pool. In addition, it is possible to specify whether the clip is a looping clip and to tag the clip so that other systems can control the animation system.
Step 802: Determine the real-time animation output according to the preprocessing result of the animation information and the motion state of the AI character.
Still referring to FIG. 12, FIG. 12 is a schematic diagram of an optional processing process of an animation video processing method according to an embodiment of the present disclosure. Three key speeds can be selected as the main matching basis, namely the current speed, the future speed a seconds later, and the past speed b seconds earlier, while the character's vertical speed is ignored. Each speed can therefore actually be expressed as a two-dimensional vector, that is, the current speed (V_cur_x, V_cur_y), the future speed (V_fur_x, V_fur_y), and the past speed (V_pre_x, V_pre_y). The three speed vectors are combined into a whole, that is, a six-dimensional vector V_char = (V_cur_x, V_cur_y, V_fur_x, V_fur_y, V_pre_x, V_pre_y) is used to describe the motion state of the character. Similarly, each animation key frame is precomputed to obtain the six-dimensional motion vector V_anim to which it belongs. The algorithm needs to find, among all animation key frames, the several V_anim vectors closest to V_char as candidates for the final output animation. To simplify the calculation, the Euclidean distance can be used directly to measure the closeness between two six-dimensional vectors.
Further, for precomputed animation data, the future speed can be derived from the subsequent animation frames, but for an AI character the future speed is unknown, so a prediction algorithm is needed. Since an AI character usually moves along the path calculated by the pathfinding algorithm, the future speed can be predicted from the path being followed. The prediction algorithm uses the maximum speed V_max and the maximum acceleration A_max of the character's movement. As shown in FIG. 12, it is assumed that the character accelerates to V_max at the current position and holds that speed for a period of time, decelerates when approaching a turning point of the path, and re-accelerates to V_max after passing the turning point; the time corresponding to each section (acceleration section, deceleration section, full-speed section) can be calculated. Based on this movement model, it can be determined which part of the acceleration-deceleration-reacceleration process the AI character is in a seconds later, and the predicted future speed a seconds later can thus be calculated (a sketch follows).
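The following is a minimal sketch of this trapezoidal speed-profile prediction under stated assumptions: straight-line motion toward the next path corner, symmetric acceleration and deceleration at A_max, an initial speed not above V_max, and, as a simplification of ours, braking to a stop at the corner rather than modeling the re-acceleration after it.

```python
# Minimal sketch of predicting the speed magnitude t seconds ahead.
# Assumptions (ours): v0 <= v_max; the character accelerates at a_max to
# v_max, cruises, then decelerates to 0 at the next corner.
def predict_future_speed(dist_to_corner, v0, v_max, a_max, t):
    """Return the predicted speed magnitude after t seconds along the path."""
    t_acc = max((v_max - v0) / a_max, 0.0)           # acceleration section
    d_acc = v0 * t_acc + 0.5 * a_max * t_acc ** 2
    d_dec = v_max ** 2 / (2.0 * a_max)               # braking distance
    d_cruise = max(dist_to_corner - d_acc - d_dec, 0.0)
    t_cruise = d_cruise / v_max                      # full-speed section
    t_dec = v_max / a_max                            # deceleration section
    if t < t_acc:
        return v0 + a_max * t
    if t < t_acc + t_cruise:
        return v_max
    if t < t_acc + t_cruise + t_dec:
        return v_max - a_max * (t - t_acc - t_cruise)
    return 0.0  # at or past the corner in this simplified model
```

Very short segments, where full speed is never reached, would need a slightly refined profile, but the piecewise structure (accelerate, cruise, decelerate) is the same as in the description.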
Meanwhile, directly traversing all animation key frames incurs heavy computational overhead, which is the main overhead bottleneck of conventional Motion Matching. This solution uses a KD Tree to accelerate this step of the computation: in the initialization phase, all animation key frames are built into a KD Tree according to their V_anim, and at runtime the N nearest neighbors are found in the KD Tree according to V_char. This greatly reduces the time required for matching queries. It should be noted that the precomputation results indicate whether the supporting foot of each animation key frame is the left foot or the right foot, so two KD Trees are built, one for each foot. During a matching query, the KD Tree opposite to the current supporting foot is selected; that is, if the current supporting foot is the left foot, the query is made only in the KD Tree corresponding to the right foot. This ensures that in the final animation the two feet land alternately, which conforms to the actual motion pattern of a game character. After obtaining the N candidate animation videos closest to the current motion state, the algorithm selects from them the animation video whose foot positions are closest to the current character's foot positions as the corresponding real-time animation output (see the sketch below).
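A minimal sketch of this two-tree query, using scipy's cKDTree; the class name LocomotionMatcher and the data layout are illustrative assumptions, not the source's implementation.

```python
# Minimal sketch (assumptions: scipy available; per-foot key-frame
# feature matrices precomputed; foot positions per key frame stored as
# (num_keyframes, 2, 3) arrays). Queries the tree opposite the current
# supporting foot, then refines by foot positions.
import numpy as np
from scipy.spatial import cKDTree

class LocomotionMatcher:
    def __init__(self, v_anim_left, v_anim_right,
                 feet_pos_left, feet_pos_right):
        # One KD Tree per supporting foot, built once at initialization.
        self.trees = {"left": cKDTree(v_anim_left),
                      "right": cKDTree(v_anim_right)}
        self.feet_pos = {"left": feet_pos_left, "right": feet_pos_right}

    def match(self, v_char, current_feet_pos, current_support, n=8):
        # Alternate feet: query the tree opposite the current support foot.
        side = "right" if current_support == "left" else "left"
        k = min(n, self.trees[side].n)          # don't ask for more points
        _, idx = self.trees[side].query(v_char, k=k)
        idx = np.atleast_1d(idx)
        # Refine: pick the candidate whose feet are closest to the
        # character's current feet.
        diffs = self.feet_pos[side][idx] - current_feet_pos
        best = idx[int(np.argmin(np.linalg.norm(diffs, axis=(1, 2))))]
        return side, best
```

Building the two trees once and querying the opposite-foot tree is what keeps the per-character cost low and, at the same time, enforces alternating foot plants in the output.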
Step 803: Determine the complete animation output according to the real-time animation output.
Beneficial technical effects:
The technical solutions shown in the embodiments of the present disclosure determine an original animation video matching a target object, the original animation video being used for representing motion states of the target object in different usage scenarios; preprocess the original animation video to obtain key video frames in the original animation video and motion data corresponding to the key video frames; determine, according to the motion data corresponding to the key video frames, a motion data set matching the target object; determine a displacement parameter of the target object based on a real-time motion state of the target object; and obtain, based on the motion data set matching the target object and through the displacement parameter of the target object, an animation video matching the real-time motion state of the target object. In this way, the animation video matching the real-time motion state of the target object can be obtained from the original animation video accurately and efficiently. Compared with the conventional technology, with the information processing capability of the user's electronic device unchanged, the number of supported AI characters and the corresponding animation quality are both greatly improved, effectively improving the user experience.
The foregoing descriptions are merely embodiments of the present disclosure and are not intended to limit the protection scope of the present disclosure. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure shall fall within the protection scope of the present disclosure.
Industrial Applicability
The embodiments of this application disclose an animation video processing method and apparatus, an electronic device, and a storage medium, which can determine an original animation video matching a target object; preprocess the original animation video to obtain key video frames in the original animation video and motion data corresponding to the key video frames; determine a motion data set matching the target object; determine a displacement parameter of the target object; and obtain, based on the motion data set matching the target object and through the displacement parameter of the target object, an animation video matching the real-time motion state of the target object. The present disclosure can accurately and efficiently obtain, from the original animation video, the animation video matching the real-time motion state of the target object. Compared with the conventional technology, with the information processing capability of the user's electronic device unchanged, the number of supported AI characters and the corresponding animation quality are both greatly improved, effectively improving the user experience.

Claims (16)

  1. An animation video processing method, the method being performed by an electronic device, the animation video processing method comprising:
    determining an original animation video matching a target object, wherein the original animation video is used for representing motion states of the target object in different usage scenarios;
    preprocessing the original animation video to obtain key video frames in the original animation video and motion data corresponding to the key video frames;
    determining, according to the motion data corresponding to the key video frames, a motion data set matching the target object;
    determining a displacement parameter of the target object based on a real-time motion state of the target object; and
    obtaining, based on the motion data set matching the target object and through the displacement parameter of the target object, an animation video matching the real-time motion state of the target object.
  2. The method according to claim 1, wherein the determining an original animation video matching a target object comprises:
    determining an animation video output environment corresponding to the target object;
    determining, according to the animation video output environment, the motion states of the target object in different usage scenarios; and
    performing motion capture on the motion actions of a captured subject according to the motion states of the target object in different usage scenarios, to form the original animation video matching the target object.
  3. The method according to claim 1, wherein the preprocessing the original animation video to obtain key video frames in the original animation video and motion data corresponding to the key video frames comprises:
    detecting limb landing positions of the target object in all video frames of the original animation video;
    determining, when a limb landing position of the target object is located in a corresponding horizontal plane, or
    when a limb landing position of the target object is in contact with a corresponding reference object, that a video frame including the limb landing position of the target object is a key video frame; and
    determining, based on the key video frames, displacement parameters of the target object in different usage scenarios as the motion data corresponding to the key video frames.
  4. The method according to claim 3, wherein the detecting limb landing positions of the target object in all video frames of the original animation video comprises:
    determining, when the limbs of the target object are the left lower limb and the right lower limb of the target object, a left-lower-limb speed of the target object and a right-lower-limb speed of the target object;
    determining, when the difference between the left-lower-limb speed of the target object and the right-lower-limb speed of the target object reaches a negative extremum, that the left lower limb of the target object is located in the corresponding horizontal plane; and
    determining, when the difference between the left-lower-limb speed of the target object and the right-lower-limb speed of the target object reaches a positive extremum, that the right lower limb of the target object is located in the corresponding horizontal plane.
  5. The method according to claim 4, wherein the method further comprises:
    when the animation video output environment is an AI game environment,
    determining, when the difference between the left-foot speed and the right-foot speed of the target object reaches a negative extremum, that the video frame in which the left foot of the target object has landed is a key video frame.
  6. The method according to claim 3, wherein the detecting limb landing positions of the target object in all video frames of the original animation video comprises:
    determining, when the limbs of the target object are the left upper limb and the right upper limb of the target object, a left-upper-limb speed of the target object and a right-upper-limb speed of the target object;
    determining, when the difference between the left-upper-limb speed of the target object and the right-upper-limb speed of the target object reaches a negative extremum, that the left upper limb of the target object is in contact with the corresponding reference object; and
    determining, when the difference between the left-upper-limb speed of the target object and the right-upper-limb speed of the target object reaches a positive extremum, that the right upper limb of the target object is in contact with the corresponding reference object.
  7. The method according to claim 1, wherein the determining a displacement parameter of the target object based on a real-time motion state of the target object comprises:
    determining a movement path of the target object based on a pathfinding algorithm process;
    determining, according to the motion data set matching the target object, a maximum displacement parameter and a corresponding maximum acceleration parameter matching the target object; and
    determining displacement parameters of the target object at different moments according to the movement path of the target object, the maximum displacement parameter matching the target object, and the corresponding maximum acceleration parameter.
  8. The method according to claim 1, wherein the obtaining, based on the motion data set matching the target object and through the displacement parameter of the target object, an animation video matching the real-time motion state of the target object comprises:
    determining, based on the displacement parameter of the target object, a first motion vector corresponding to the current motion state of the target object;
    determining, based on the motion data set matching the target object, a second motion vector corresponding to each key video frame;
    determining, according to the first motion vector, a second motion vector matching the first motion vector in a binary search tree structure corresponding to the second motion vectors; and
    determining a corresponding key video frame according to the second motion vector matching the first motion vector, and obtaining, through the determined key video frame, the animation video matching the real-time motion state of the target object.
  9. The method according to claim 8, wherein the determining, according to the first motion vector, a second motion vector matching the first motion vector in a binary search tree structure corresponding to the second motion vectors comprises:
    when the first motion vector indicates that the left lower limb of the target object is located in the corresponding horizontal plane,
    determining, through the right-lower-limb binary search tree structure corresponding to the second motion vectors, the second motion vector matching the first motion vector; or,
    when the first motion vector indicates that the right lower limb of the target object is located in the corresponding horizontal plane,
    determining, through the left-lower-limb binary search tree structure corresponding to the second motion vectors, the second motion vector matching the first motion vector.
  10. The method according to claim 9, wherein the obtaining, through the determined key video frame, the animation video matching the real-time motion state of the target object comprises:
    determining different to-be-output animation videos according to the key video frames; and
    determining, among the different to-be-output animation videos, the to-be-output animation video in which the distance between the limb landing positions of the target object and the current limb landing positions of the target object is smallest as the animation video matching the real-time motion state of the target object.
  11. The method according to claim 1, wherein the method further comprises:
    obtaining a target resolution corresponding to the animation video output environment; and
    performing, based on the target resolution, resolution enhancement processing on the animation video matching the real-time motion state of the target object, so that the animation video matching the real-time motion state of the target object matches the animation video output environment.
  12. An animation video processing apparatus, wherein the apparatus comprises:
    an information transmission module, configured to determine an original animation video matching a target object, wherein the original animation video is used for representing motion states of the target object in different usage scenarios; and
    an information processing module, configured to preprocess the original animation video to obtain key video frames in the original animation video and motion data corresponding to the key video frames;
    the information processing module being configured to determine, according to the motion data corresponding to the key video frames, a motion data set matching the target object;
    the information processing module being configured to determine a displacement parameter of the target object based on a real-time motion state of the target object; and
    the information processing module being configured to obtain, based on the motion data set matching the target object and through the displacement parameter of the target object, an animation video matching the real-time motion state of the target object.
  13. The apparatus according to claim 12, wherein
    the information processing module is configured to determine an animation video output environment corresponding to the target object;
    the information processing module is configured to determine, according to the animation video output environment, the motion states of the target object in different usage scenarios; and
    the information processing module is configured to perform motion capture on the motion actions of a captured subject according to the motion states of the target object in different usage scenarios, to form the original animation video matching the target object.
  14. The apparatus according to claim 12, wherein
    the information processing module is configured to detect limb landing positions of the target object in all video frames of the original animation video;
    the information processing module is configured to determine, when a limb landing position of the target object is located in a corresponding horizontal plane, or
    is in contact with a corresponding reference object, that a video frame including the limb landing position of the target object is a key video frame; and
    the information processing module is configured to determine, based on the key video frames, displacement parameters of the target object in different usage scenarios as the motion data corresponding to the key video frames.
  15. An electronic device, wherein the electronic device comprises:
    a memory, configured to store executable instructions; and
    a processor, configured to implement the animation video processing method according to any one of claims 1 to 11 when running the executable instructions stored in the memory.
  16. A computer-readable storage medium, storing executable instructions, wherein the executable instructions, when executed by a processor, implement the animation video processing method according to any one of claims 1 to 11.
PCT/CN2021/076159 2020-02-10 2021-02-09 Animation video processing method and apparatus, electronic device, and storage medium WO2021160108A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/687,008 US11836841B2 (en) 2020-02-10 2022-03-04 Animation video processing method and apparatus, electronic device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010085370.5A CN111298433B (zh) 2020-02-10 2020-02-10 一种动画视频处理方法、装置、电子设备及存储介质
CN202010085370.5 2020-02-10

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/687,008 Continuation US11836841B2 (en) 2020-02-10 2022-03-04 Animation video processing method and apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021160108A1 true WO2021160108A1 (zh) 2021-08-19

Family

ID=71152741

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/076159 WO2021160108A1 (zh) 2020-02-10 2021-02-09 一种动画视频处理方法、装置、电子设备及存储介质

Country Status (3)

Country Link
US (1) US11836841B2 (zh)
CN (1) CN111298433B (zh)
WO (1) WO2021160108A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113781615A (zh) * 2021-09-28 2021-12-10 腾讯科技(深圳)有限公司 Animation generation method and apparatus, device, storage medium, and program product

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111298433B (zh) * 2020-02-10 2022-07-29 腾讯科技(深圳)有限公司 Animation video processing method and apparatus, electronic device, and storage medium
CN113129416A (zh) * 2020-06-22 2021-07-16 完美世界(北京)软件科技发展有限公司 Animation blend space partitioning method, apparatus, device, and readable medium
CN111659120B (zh) * 2020-07-16 2023-04-14 网易(杭州)网络有限公司 Virtual character position synchronization method, apparatus, medium, and electronic device
CN113542855B (zh) * 2021-07-21 2023-08-22 Oppo广东移动通信有限公司 Video processing method and apparatus, electronic device, and readable storage medium
CN113633970B (zh) * 2021-08-18 2024-03-08 腾讯科技(成都)有限公司 Action effect display method, apparatus, device, and medium
CN113794799A (zh) * 2021-09-17 2021-12-14 维沃移动通信有限公司 Video processing method and apparatus
CN117409117A (zh) * 2023-10-18 2024-01-16 北京华航唯实机器人科技股份有限公司 Method and apparatus for generating view-direction animation
CN117475041B (zh) * 2023-12-28 2024-03-29 湖南视觉伟业智能科技有限公司 RCMS-based digital twin quay crane simulation method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108416013A (zh) * 2018-03-02 2018-08-17 北京奇艺世纪科技有限公司 Video matching, retrieval, classification, and recommendation method and apparatus, and electronic device
CN109325456A (zh) * 2018-09-29 2019-02-12 佳都新太科技股份有限公司 Target recognition method and apparatus, target recognition device, and storage medium
CN109523613A (zh) * 2018-11-08 2019-03-26 腾讯科技(深圳)有限公司 Data processing method and apparatus, computer-readable storage medium, and computer device
US10388053B1 (en) * 2015-03-27 2019-08-20 Electronic Arts Inc. System for seamless animation transition
US20190381404A1 (en) * 2018-06-18 2019-12-19 Unity IPR ApS Method and system for real-time animation generation using machine learning
CN111298433A (zh) * 2020-02-10 2020-06-19 腾讯科技(深圳)有限公司 Animation video processing method and apparatus, electronic device, and storage medium

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6587574B1 (en) * 1999-01-28 2003-07-01 Koninklijke Philips Electronics N.V. System and method for representing trajectories of moving objects for content-based indexing and retrieval of visual animated data
US7460730B2 (en) * 2005-08-04 2008-12-02 Microsoft Corporation Video registration and image sequence stitching
KR100727034B1 (ko) * 2005-12-09 2007-06-12 한국전자통신연구원 Method for representing and animating a two-dimensional humanoid character in three-dimensional space
CN101329768B (zh) * 2008-07-29 2010-07-21 浙江大学 Method for synthesizing cartoon animation based on background views
JP5750864B2 (ja) * 2010-10-27 2015-07-22 ソニー株式会社 Image processing apparatus, image processing method, and program
CN102289836B (zh) * 2011-07-25 2013-10-16 北京农业信息技术研究中心 Plant animation synthesis method
CN102609970B (zh) * 2011-12-19 2014-11-05 中山大学 Two-dimensional animation synthesis method based on motion element reuse
US9305386B2 (en) * 2012-02-17 2016-04-05 Autodesk, Inc. Editable motion trajectories
KR20150075909A (ko) * 2013-12-26 2015-07-06 한국전자통신연구원 Method and apparatus for editing three-dimensional character motion
KR102399049B1 (ko) * 2015-07-15 2022-05-18 삼성전자주식회사 Electronic device and image processing method of the electronic device
JP6775776B2 (ja) * 2017-03-09 2020-10-28 株式会社岩根研究所 Free-viewpoint movement display apparatus
CN107481303B (zh) * 2017-08-07 2020-11-13 东方联合动画有限公司 Real-time animation generation method and system
US10864446B2 (en) * 2019-03-27 2020-12-15 Electronic Arts Inc. Automated player control takeover in a video game

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10388053B1 (en) * 2015-03-27 2019-08-20 Electronic Arts Inc. System for seamless animation transition
CN108416013A (zh) * 2018-03-02 2018-08-17 北京奇艺世纪科技有限公司 Video matching, retrieval, classification, and recommendation method and apparatus, and electronic device
US20190381404A1 (en) * 2018-06-18 2019-12-19 Unity IPR ApS Method and system for real-time animation generation using machine learning
CN109325456A (zh) * 2018-09-29 2019-02-12 佳都新太科技股份有限公司 Target recognition method and apparatus, target recognition device, and storage medium
CN109523613A (zh) * 2018-11-08 2019-03-26 腾讯科技(深圳)有限公司 Data processing method and apparatus, computer-readable storage medium, and computer device
CN111298433A (zh) * 2020-02-10 2020-06-19 腾讯科技(深圳)有限公司 Animation video processing method and apparatus, electronic device, and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113781615A (zh) * 2021-09-28 2021-12-10 腾讯科技(深圳)有限公司 Animation generation method and apparatus, device, storage medium, and program product
CN113781615B (zh) * 2021-09-28 2023-06-13 腾讯科技(深圳)有限公司 Animation generation method and apparatus, device, and storage medium

Also Published As

Publication number Publication date
US20220189094A1 (en) 2022-06-16
US11836841B2 (en) 2023-12-05
CN111298433B (zh) 2022-07-29
CN111298433A (zh) 2020-06-19

Similar Documents

Publication Publication Date Title
WO2021160108A1 Animation video processing method and apparatus, electronic device, and storage medium
CN113365706B Artificial intelligence (AI) model training using a cloud gaming network
JP5887458B1 Game system and the like that performs pathfinding for a non-player character based on a player's movement history
US11110353B2 Distributed training for machine learning of AI controlled virtual entities on video game clients
CN112791394B Game model training method and apparatus, electronic device, and storage medium
CN111260762A Animation implementation method and apparatus, electronic device, and storage medium
US10758826B2 Systems and methods for multi-user editing of virtual content
US11816772B2 System for customizing in-game character animations by players
WO2022000971A1 Camera movement mode switching method and apparatus, computer program, and readable medium
WO2022017111A1 Image processing method and apparatus, electronic device, and computer-readable storage medium
WO2022068452A1 Interaction processing method and apparatus for virtual props, electronic device, and readable storage medium
US20220409998A1 Request distribution system
CN112057860B Method, apparatus, device, and storage medium for activating an operation control in a virtual scene
WO2023142617A1 Ray display method, apparatus, and device based on a virtual scene, and storage medium
CN113577774A Virtual object generation method and apparatus, electronic device, and storage medium
CN105531003B Simulation apparatus and simulation method
JP2023541150A Screen display method, apparatus, device, and computer program
CN111389007B Game control method and apparatus, computing device, and storage medium
Lai et al. Training an agent for third-person shooter game using unity ml-agents
CN112742031A Model training method, game testing method, and AI character training method and apparatus
CN116196611A Motion-sensing game method based on hand-waving actions
US11896898B2 State stream game engine
Fang et al. Implementing First-Person Shooter Game AI in WILD-SCAV with Rule-Enhanced Deep Reinforcement Learning
JP2011255114A Program, information storage medium, and image generation system
Zhan et al. Cooperation Mode of Soccer Robot Game Based on Improved SARSA Algorithm

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21753893

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21753893

Country of ref document: EP

Kind code of ref document: A1