CN116109737A - Animation generation method, animation generation device, computer equipment and computer readable storage medium - Google Patents

Animation generation method, animation generation device, computer equipment and computer readable storage medium

Info

Publication number
CN116109737A
CN116109737A (application number CN202310240046.XA)
Authority
CN
China
Prior art keywords
animation display
animation
virtual
preform
display page
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310240046.XA
Other languages
Chinese (zh)
Inventor
朱勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202310240046.XA
Publication of CN116109737A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00: Animation
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/001: Texturing; Colouring; Generation of texture or colour
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application discloses an animation generation method, an animation generation device, computer equipment and a computer readable storage medium, including the following steps: determining an animation display page to be displayed; acquiring the preform bound to each animation display region based on the binding relationship between each animation display region in the animation display page and the corresponding preform; and rendering the corresponding virtual scene animation in each animation display region of the animation display page based on the display parameters, the binding relationship and the preform corresponding to that animation display region, and playing the virtual scene animation through the animation display region. By displaying, in the same animation display page, multiple virtual scene animations shot by different virtual cameras from different angles at the same or different time points, the diversity of virtual camera shots and display angles is improved, the presentation of game cutscenes is enriched, the cutscenes are given rich visual expressiveness, and player retention can be effectively improved.

Description

Animation generation method, animation generation device, computer equipment and computer readable storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and apparatus for generating an animation, a computer device, and a computer readable storage medium.
Background
With the continuous development of computer communication technology and the wide adoption of terminals such as smart phones, tablet computers and notebook computers, terminals have developed towards diversification and personalization and have become increasingly indispensable in daily life and work. To meet people's pursuit of a richer mental life, entertainment games that can run on such terminals have emerged; for example, multiplayer online tactical competitive games, massively multiplayer online games and other games developed on client or client-server architectures are popular with users for their smoothness, good operating feel, instant combat and other characteristics.
Currently, in order to give players a better game experience, some games are provided with a game scenario, and a player can trigger the playing of the game scenario by manipulating a virtual character, so as to advance the game progress or obtain certain level clues in the game. In the prior art, a plurality of virtual characters are usually displayed on the player's terminal screen, and their respective actions, together with the configured characters and special effects, are presented on the screen in combination with the movement of the virtual camera lens, thereby playing the game cutscene corresponding to the game scenario. However, in the existing method, due to the limitation of the virtual camera lens and the display angle, the presentation of the game cutscene is monotonous, which easily increases the player's fatigue when watching the cutscene, lowers player retention and leads to player churn.
Disclosure of Invention
The embodiment of the application provides an animation generation method, an animation generation device, computer equipment and a computer readable storage medium. By adopting a multi-panel comic style of presentation, virtual scene animations shot by different virtual cameras from different angles and at different time points can be displayed in the same animation display page, which improves the diversity of virtual camera shots and display angles, enriches the presentation of game cutscenes, gives the cutscenes rich visual expressiveness, and can effectively improve player retention.
The embodiment of the application provides an animation generation method, which comprises the following steps:
determining an animation display page to be displayed, wherein the animation display page comprises a plurality of animation display areas, each animation display area in the animation display page is provided with a display parameter, and the display parameter comprises a time parameter for determining the display time of each animation display area in the animation display page;
acquiring a preform bound with the animation display region based on the binding relation between each animation display region and the corresponding preform in the animation display page, wherein one preform is configured with a designated virtual camera, a virtual game scene shot by the designated virtual camera, a virtual object arranged in the virtual game scene and motion information of the virtual object;
And rendering corresponding virtual scene animation in each animation display area of the animation display page based on the display parameters, the binding relation and the prefabricated body corresponding to the animation display area, and playing the virtual scene animation through the animation display area.
Correspondingly, the embodiment of the application also provides an animation generation device, which comprises:
a determining unit, configured to determine an animation display page to be displayed, where the animation display page includes a plurality of animation display regions, each animation display region in the animation display page is provided with a display parameter, and the display parameter includes a time parameter for determining a display time of each animation display region in the animation display page;
an obtaining unit, configured to obtain a preform bound to the animation display region based on a binding relationship between each animation display region and a corresponding preform in the animation display page, where one preform is configured with a specified virtual camera, a virtual game scene shot by the specified virtual camera, a virtual object set in the virtual game scene, and motion information of the virtual object;
And the rendering unit is used for rendering corresponding virtual scene animation in each animation display area of the animation display page based on the display parameters, the binding relation and the prefabricated body corresponding to the animation display area, and playing the virtual scene animation through the animation display area.
Accordingly, embodiments of the present application also provide a computer device including a processor, a memory, and a computer program stored on the memory and capable of running on the processor, the computer program implementing the animation generation method of any one of the above when executed by the processor.
Accordingly, embodiments of the present application also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the animation generation method of any one of the above.
The embodiment of the application provides an animation generation method, an animation generation device, computer equipment and a computer readable storage medium. First, an animation display page to be displayed is determined, where the animation display page includes a plurality of animation display regions, each animation display region is provided with a display parameter, and the display parameter includes a time parameter for determining the display time of each animation display region in the animation display page. Then, based on the binding relationship between each animation display region in the animation display page and the corresponding preform, the preform bound to that animation display region is acquired, where one preform is configured with a designated virtual camera, a virtual game scene shot by the designated virtual camera, a virtual object arranged in the virtual game scene and motion information of the virtual object. Finally, based on the display parameters, the binding relationship and the preform corresponding to each animation display region, the corresponding virtual scene animation is rendered in each animation display region of the animation display page and played through that animation display region. By adopting a multi-panel comic style of presentation, the embodiment of the application can display, in the same animation display page, virtual scene animations shot by different virtual cameras from different angles and at different time points, which improves the diversity of virtual camera shots and display angles, enriches the presentation of game cutscenes, gives the cutscenes rich visual expressiveness, and can effectively improve player retention.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a system schematic diagram of an animation generation device according to an embodiment of the present application.
Fig. 2 is a schematic flow chart of an animation generation method according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of an animation generating device according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
In the prior art, a plurality of virtual characters are usually displayed on the player's terminal screen, and their respective actions, together with the configured characters and special effects, are presented on the screen in combination with the movement of the virtual camera lens, thereby playing the game cutscene corresponding to the game scenario. However, in the existing method, due to the limitation of the virtual camera lens and the display angle, the presentation of the game cutscene is monotonous, which easily increases the player's fatigue when watching the cutscene, lowers player retention and leads to player churn.
In order to solve the above problems, embodiments of the present application provide an animation generation method, an animation generation device, a computer device, and a computer-readable storage medium. Specifically, the animation generation method of the embodiment of the application may be performed by a computer device, where the computer device may be a terminal. The terminal may be a device such as a smart phone, a tablet computer, a notebook computer, a touch screen, a game console, a personal computer (PC) or a personal digital assistant (PDA), and the terminal may further include a client, which may be a video application client, a music application client, a game application client, a browser client carrying a game program, an instant messaging client, or the like.
Referring to fig. 1, fig. 1 is a schematic view of a scenario of an animation generation system provided in an embodiment of the present application, which includes a computer device; the system may include at least one terminal, at least one server, and a network. The terminal held by the user can be connected to the servers of different games through the network. A terminal is any device having computing hardware capable of supporting and executing a software product corresponding to a game. In addition, the terminal has one or more multi-touch-sensitive screens for sensing and obtaining user input through touch or slide operations performed at a plurality of points on the one or more touch-sensitive display screens. Moreover, when the system includes a plurality of terminals, a plurality of servers, and a plurality of networks, different terminals may be connected to each other through different networks and different servers. The network may be a wireless network or a wired network, such as a wireless local area network (WLAN), a local area network (LAN), a cellular network, a 2G network, a 3G network, a 4G network or a 5G network. In addition, different terminals may also be connected to other terminals or to a server using their own Bluetooth network or hotspot network.
The computer device may determine an animation display page to be displayed, where the animation display page includes a plurality of animation display regions, each animation display region in the animation display page is provided with a display parameter, and the display parameter includes a time parameter for determining the display time of each animation display region in the animation display page; then, based on the binding relationship between each animation display region in the animation display page and the corresponding preform, acquire the preform bound to that animation display region, where one preform is configured with a designated virtual camera, a virtual game scene shot by the designated virtual camera, a virtual object arranged in the virtual game scene and motion information of the virtual object; and finally, based on the display parameters, the binding relationship and the preform corresponding to each animation display region, render the corresponding virtual scene animation in each animation display region of the animation display page and play the virtual scene animation through that animation display region. By adopting a multi-panel comic style of presentation, the embodiment of the application can display, in the same animation display page, virtual scene animations shot by different virtual cameras from different angles and at different time points, which improves the diversity of virtual camera shots and display angles, enriches the presentation of game cutscenes, gives the cutscenes rich visual expressiveness, and can effectively improve player retention.
It should be noted that the schematic view of the animation generation system shown in fig. 1 is only an example. The animation generation system and scenario described in the embodiments of the present application are intended to more clearly explain the technical solutions of the embodiments and do not constitute a limitation on them; those skilled in the art will appreciate that, as the animation generation system evolves and new service scenarios appear, the technical solutions provided in the embodiments of the present application are equally applicable to similar technical problems.
The animation generation method provided by the embodiment of the application can be applied in a game engine; for example, it can be implemented with the Unity (Unity 3D) game engine. The Unity game engine is a multi-platform integrated game development tool with which users can easily create interactive content such as three-dimensional video games, architectural visualizations and real-time three-dimensional animations. Furthermore, the animation generation method provided by the embodiment of the application can be produced with the Timeline tool, which is a tool in the Unity game engine that allows multiple game objects to be edited along a time axis and is mainly used for making cutscenes. Specifically, the embodiment of the application can use the prefabricated body (Prefab) of the Unity game engine: a Prefab is a resource type in the Unity game engine that works like a template, so a game maker can save an edited game resource as a Prefab and use that template to create multiple instance objects. In addition, in the game, the rendering result of a virtual camera can be output to a render texture and used as a texture map, so that the camera's rendering result is updated in real time; by placing this map on a UI interface, a three-dimensional model can be rendered into a two-dimensional picture, that is, a 3D object can be displayed on a 2D plane.
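As a concrete illustration of the render-texture mechanism described above, the following C# sketch routes one virtual camera's output onto one UI cell. It is a minimal sketch under assumed names (CameraToPanel, sceneCamera and targetPanel are not taken from the patent), not the patent's actual implementation.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Minimal sketch: render a virtual camera into a RenderTexture and show that
// texture on a UI RawImage, so a 3D scene appears inside a 2D UI cell.
public class CameraToPanel : MonoBehaviour
{
    public Camera sceneCamera;   // the designated virtual camera of one preform
    public RawImage targetPanel; // the UI cell (animation display region) on the page

    void Start()
    {
        // Off-screen texture sized to the UI cell; the camera renders into it
        // every frame, and the RawImage displays it in real time.
        var rect = targetPanel.rectTransform.rect;
        var rt = new RenderTexture((int)rect.width, (int)rect.height, 24);
        sceneCamera.targetTexture = rt;
        targetPanel.texture = rt;
    }
}
```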
Specifically, in the animation generation method provided by the embodiment of the application, several scenario segments are produced with Timeline, and a UI interface containing multiple comic panels is then built. Each Timeline scenario segment can be rendered into a different panel of the UI interface, so the result looks like a dynamic comic inside the game, with each panel showing the real-time rendering result of a different Timeline. Combined with UI text and pictures, and with motion effects and camera movement added, this gives the scenario rich visual expressiveness and impact.
The embodiment of the application provides an animation generation method, an animation generation device, computer equipment and a computer readable storage medium. The animation generation method can be used on a terminal such as a smart phone, a tablet computer, a notebook computer or a personal computer. The animation generation method, apparatus, computer device and storage medium are described in detail below. The order in which the following embodiments are described is not intended to limit the preferred order of the embodiments.
Referring to fig. 2, fig. 2 is a schematic flow chart of an animation generation method according to an embodiment of the present application, and the specific flow may be as follows:
101, determining an animation display page to be displayed, wherein the animation display page comprises a plurality of animation display areas, each animation display area in the animation display page is provided with a display parameter, and the display parameter comprises a time parameter for determining the display time of each animation display area in the animation display page.
In order to be able to display a plurality of game animations in the same animation display page, before step "determine animation display page to be displayed", the method may comprise:
acquiring a preset animation display page, and determining a plurality of preformed bodies which are required to be bound for the animation display page, wherein the animation display page comprises a plurality of animation display areas;
binding the plurality of preforms with the animation display page, and determining the binding relation between each animation display region in the animation display page and the corresponding preform to bind the corresponding preform for each animation display region in the animation display page.
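To make the binding relationship concrete, the following C# sketch shows one possible data layout in which each animation display region records its bound preform and display parameters. All type and field names (RegionBinding, AnimationDisplayPage, startTime, duration) are illustrative assumptions, not taken from the patent.

```csharp
using System;
using UnityEngine;

// Assumed structure: one entry per animation display region, recording the
// bound preform (prefab) and the display parameters used to schedule it.
[Serializable]
public class RegionBinding
{
    public RectTransform displayRegion; // one animation display region of the page
    public GameObject boundPreform;     // the preform (prefab) bound to this region
    public float startTime;             // time parameter: when the region starts displaying
    public float duration;              // how long the region stays on screen
}

public class AnimationDisplayPage : MonoBehaviour
{
    // The order of the entries can double as the display order of the regions.
    public RegionBinding[] bindings;
}
```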
In one embodiment, the step of determining the plurality of preforms to be bound to the animated display page may include:
determining the area number of the animation display areas in the animation display page;
creating a plurality of preset preforms based on the number of areas, wherein the number of the preforms of the preset preforms is the same as the number of the areas, and each preset preform is provided with playing information on a designated time axis;
and performing component adding processing on the preset preform to obtain a target preform.
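The following C# sketch illustrates the idea of creating one preset preform per display region, each carrying its playing information on a Timeline through a PlayableDirector. The factory method and its names are assumptions for illustration only.

```csharp
using UnityEngine;
using UnityEngine.Playables;

// Sketch: instantiate one preset preform per animation display region and
// return the PlayableDirector of each, which holds the per-preform Timeline
// (the "playing information on a designated time axis").
public static class PreformFactory
{
    public static PlayableDirector[] CreatePresetPreforms(GameObject presetPreform, int regionCount)
    {
        var directors = new PlayableDirector[regionCount];
        for (int i = 0; i < regionCount; i++)
        {
            var instance = Object.Instantiate(presetPreform);
            instance.name = "Preform_" + i;
            directors[i] = instance.GetComponent<PlayableDirector>();
        }
        return directors;
    }
}
```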
In another embodiment, the step of performing component addition processing on the preset preform to obtain the target preform may include:
and associating a designated virtual camera component, a designated virtual game scene component and a designated virtual object component for the preset preform to add the designated virtual camera, the virtual game scene and the virtual object in the preset preform so as to generate a target preform.
Further, after "associating the specified virtual camera component, the specified virtual game scene component, and the specified virtual object component with the preset preform to add the specified virtual camera, the virtual game scene, and the virtual object to the preset preform to generate the target preform" the method may include:
and editing the virtual object according to the appointed motion parameters to obtain the motion information of the virtual object.
In order to display the virtual scene animation under different viewing angles on the animation display page, and enrich the expression form of the game cut scene, after associating the specified virtual camera component, the specified virtual game scene component and the specified virtual object component with the preset preform to add the specified virtual camera, the virtual game scene and the virtual object in the preset preform and generate the target preform, the method may include:
And adjusting the camera shooting parameters of the appointed virtual camera according to the appointed camera shooting parameters to obtain an adjusted virtual camera with the adjusted camera shooting parameters, wherein the adjusted virtual camera is used for shooting the virtual game scene according to the adjusted camera shooting parameters.
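A minimal C# sketch of adjusting the designated virtual camera's shooting parameters is shown below; the helper name and the particular parameters (position, rotation, field of view) are assumptions used for illustration.

```csharp
using UnityEngine;

// Sketch: apply designated shooting parameters to the virtual camera before it
// captures the virtual game scene.
public static class CameraSetup
{
    public static void ApplyShootingParameters(Camera cam, Vector3 position,
                                               Vector3 eulerAngles, float fieldOfView)
    {
        cam.transform.position = position;        // where the camera shoots from
        cam.transform.eulerAngles = eulerAngles;  // shooting direction
        cam.fieldOfView = fieldOfView;            // viewing angle of the virtual camera
    }
}
```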
Further, before the step of "obtaining the preset animation display page", the method may include:
acquiring a display page to be processed;
and carrying out region division processing on the to-be-processed display page according to the region division parameters to obtain an animation display page with a plurality of animation display regions.
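As one possible form of the region division described above, the following C# sketch splits a page into a uniform grid of regions by setting each cell's anchors; real pages may instead use irregular, special-shaped cells. All names are assumptions.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: divide a full-page RectTransform into rows x cols animation display
// regions, each backed by a RawImage that can later show a camera's output.
public static class PageLayout
{
    public static RawImage[] DividePage(RectTransform page, int rows, int cols)
    {
        var cells = new RawImage[rows * cols];
        for (int r = 0; r < rows; r++)
        {
            for (int c = 0; c < cols; c++)
            {
                var go = new GameObject("Region_" + r + "_" + c, typeof(RawImage));
                var cell = go.GetComponent<RectTransform>();
                cell.SetParent(page, false);
                cell.anchorMin = new Vector2((float)c / cols, (float)r / rows);
                cell.anchorMax = new Vector2((float)(c + 1) / cols, (float)(r + 1) / rows);
                cell.offsetMin = Vector2.zero; // fill the anchored area exactly
                cell.offsetMax = Vector2.zero;
                cells[r * cols + c] = go.GetComponent<RawImage>();
            }
        }
        return cells;
    }
}
```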
In a specific embodiment, after performing the region division processing on the to-be-processed display page according to the region division parameter to obtain an animation display page with multiple animation display regions, the method may include:
and acquiring an area adjustment parameter, and carrying out area adjustment on a target animation display area in the animation display page according to the area adjustment parameter to obtain an adjusted animation display area.
The region adjustment parameters may include a region size adjustment parameter, a region position adjustment parameter, a region shape adjustment parameter and a region animation adjustment parameter, where the region size adjustment parameter is used to adjust the display size of the animation display region, the region position adjustment parameter is used to adjust the display position of the animation display region, the region shape adjustment parameter is used to adjust the display shape of the animation display region, and the region animation adjustment parameter is used to adjust the speed-line effect, the region edge effect and the region text or picture of the animation display region; specifically, an animation clip may be used to edit the motion effects of the animation display region.
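The sketch below shows how the assumed size and position adjustment parameters might be applied to one region; shape changes and speed-line or edge effects would typically be handled by UI materials and animation clips, which are omitted here.

```csharp
using UnityEngine;

// Sketch: apply region adjustment parameters to a single animation display region.
public static class RegionAdjuster
{
    public static void Apply(RectTransform region, Vector2 size, Vector2 position)
    {
        region.sizeDelta = size;            // region size adjustment parameter
        region.anchoredPosition = position; // region position adjustment parameter
    }
}
```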
Further, the step of binding the plurality of preforms with the animation display page and determining the binding relationship between each animation display region in the animation display page and the corresponding preform, so as to bind the corresponding preform for each animation display region in the animation display page, may include:
and binding the plurality of the preforms and the animation display pages based on the display parameters of the animation display areas in the animation display pages and the playing information of the target preform, and determining the binding relation between the animation display areas in the animation display pages and the corresponding preforms.
Specifically, the step of "binding the plurality of preforms with the animation display page" may include:
and carrying out association processing on the prefabricated body and the root node of the animation display page so as to bind the prefabricated body and the animation display page.
In the embodiment of the application, the events occurring in each game scene, or the same game scene viewed from different view angles, can be edited in the form of Timelines into playable dynamic scenario segments and displayed in the respective animation display regions, while the overall playback is controlled by a master Timeline; elements such as text, floating text, audio and motion effects can also be added to the animation display regions as required, so that events unfolding dynamically in different scenes and from different view angles can be presented to players.
102, acquiring a prefabricated body bound with the animation display area based on the binding relation between each animation display area and the corresponding prefabricated body in the animation display page, wherein one prefabricated body is configured with a designated virtual camera, a virtual game scene shot by the designated virtual camera, a virtual object arranged in the virtual game scene and motion information of the virtual object.
In this embodiment of the present application, the computer device may obtain, based on a binding relationship between each animation display region in the animation display page and a corresponding preform, the preform bound to the animation display region, so as to display sequentially according to a display order of the animation display regions and time axis information of a virtual scene animation corresponding to the preform when the animation display page is subsequently displayed.
And 103, rendering corresponding virtual scene animation in each animation display area of the animation display page based on the display parameters, the binding relation and the prefabricated body corresponding to the animation display area, and playing the virtual scene animation through the animation display area.
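Tying the pieces together, the following C# sketch plays back a page under the RegionBinding structure assumed in the earlier sketch: when a region's time parameter is reached, the region is revealed and the bound preform's Timeline is started through its PlayableDirector. This is an illustrative sketch, not the patent's implementation.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Playables;

// Sketch: schedule each animation display region according to its time
// parameter and play the bound preform's virtual scene animation.
public class PagePlayer : MonoBehaviour
{
    public RegionBinding[] bindings; // assumed: sorted by startTime

    void Start()
    {
        StartCoroutine(PlayPage());
    }

    IEnumerator PlayPage()
    {
        float pageStart = Time.time;
        foreach (var binding in bindings)
        {
            // Wait until this region's display time is reached.
            float wait = binding.startTime - (Time.time - pageStart);
            if (wait > 0f) yield return new WaitForSeconds(wait);

            binding.displayRegion.gameObject.SetActive(true);
            var director = binding.boundPreform.GetComponent<PlayableDirector>();
            if (director != null)
            {
                director.Play(); // play the preform's Timeline in this region
            }
        }
    }
}
```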
In summary, the embodiment of the application provides an animation generation method: an animation display page to be displayed is determined, where the animation display page includes a plurality of animation display regions, each animation display region is provided with a display parameter, and the display parameter includes a time parameter for determining the display time of each animation display region in the animation display page; then, based on the binding relationship between each animation display region in the animation display page and the corresponding preform, the preform bound to that animation display region is acquired, where one preform is configured with a designated virtual camera, a virtual game scene shot by the designated virtual camera, a virtual object arranged in the virtual game scene and motion information of the virtual object; finally, based on the display parameters, the binding relationship and the preform corresponding to each animation display region, the corresponding virtual scene animation is rendered in each animation display region of the animation display page and played through that animation display region. By adopting a multi-panel comic style of presentation, the embodiment of the application can display, in the same animation display page, virtual scene animations shot by different virtual cameras from different angles and at different time points, which improves the diversity of virtual camera shots and display angles, enriches the presentation of game cutscenes, gives the cutscenes rich visual expressiveness, and can effectively improve player retention.
In order to further explain the animation generation method provided in the embodiment of the present application, an application of the generation method of the preform in a specific implementation scenario will be described below, where the specific scenario is as follows:
(1) The computer device may preset layers according to the number of animation display regions in the animation display page, and set the same number of virtual cameras. Specifically, one layer is preset for each animation display region in the animation display page, so that the virtual cameras can distinguish the content each of them should render; that is, each virtual camera shoots the content to be displayed in one animation display region, which prevents the content displayed in different animation display regions from interfering with each other, and the layers allow each virtual camera to render its own content separately. For example, if the layer of a virtual character is set to Character, the computer device may set the rendering layer of the first virtual camera to Character, and when both the virtual character and the virtual game scene are present in the game picture, the first virtual camera will render only the virtual character. This works because the content displayed in each animation display region is the image content shot and rendered by the corresponding virtual camera. For example, when a first virtual character and a second virtual character exist in the game picture, the first virtual camera can be set to shoot the first virtual character and the second virtual camera to shoot the second virtual character, so that the picture of the first virtual character is displayed in the first animation display region and the picture of the second virtual character is displayed in the second animation display region.
(2) The game maker can add an automatic Layer-modification script in the game engine. Specifically, a component for automatically setting the Layer can be added to the outermost object of the Timeline preform, with the target Layer name configured on it, so that all child objects are set to the target layer and the workload of setting them manually is reduced. For example, when the Layer name of the preform is changed to the first layer and the computer device sets the first virtual camera to render the first layer, the first virtual camera is thereby associated with the preform on that layer and shoots that preform.
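A minimal C# sketch of such an automatic Layer-setting component is given below; the component name AutoSetLayer and the default layer name are assumptions, not names taken from the patent.

```csharp
using UnityEngine;

// Sketch: placed on the outermost object of a Timeline preform, this moves the
// object and all of its children onto the configured layer so that the
// matching camera can render the preform's contents in isolation.
public class AutoSetLayer : MonoBehaviour
{
    public string layerName = "Character"; // assumed target layer name

    void Awake()
    {
        int layer = LayerMask.NameToLayer(layerName);
        foreach (var t in GetComponentsInChildren<Transform>(true))
        {
            t.gameObject.layer = layer; // covers this object and every child, including inactive ones
        }
    }
}
```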
(3) The game maker can add a camera to the preform in the game engine. Specifically, the rendering layer of the camera can be set to the Layer name previously filled in on the automatic Layer-setting script, so that the camera renders only the virtual game scene and the corresponding content under that preform.
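In Unity terms, restricting what a camera renders is done through its culling mask, as in the short hedged sketch below (helper name assumed).

```csharp
using UnityEngine;

// Sketch: make the preform's camera render only objects on the preform's layer,
// keeping the content of different display regions separate.
public static class CameraLayerFilter
{
    public static void RenderOnlyLayer(Camera cam, string layerName)
    {
        cam.cullingMask = LayerMask.GetMask(layerName); // bitmask containing just this layer
    }
}
```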
(4) The game maker may also edit timelines. Specifically, the virtual character model may be added to the preform, and then the Timeline may be edited, so that parameters such as motion information, position information, rotation information, and the like of each virtual character may be edited in the Timeline, and various camera parameters such as position information, rotation information, and view angle of the virtual camera may be edited, thereby completing the manufacture of one preform.
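Authoring the Timeline itself is editor work, but its tracks can be rebound to the runtime instances of the virtual characters and the camera. The following sketch shows one common way to do this with PlayableDirector; the track name it looks up is an assumption.

```csharp
using UnityEngine;
using UnityEngine.Playables;

// Sketch: rebind a named Timeline track of a preform's PlayableDirector to a
// runtime object (for example an Animator on a virtual character).
public static class TimelineBinder
{
    public static void Bind(PlayableDirector director, string trackName, Object target)
    {
        foreach (var output in director.playableAsset.outputs)
        {
            if (output.streamName == trackName)
            {
                director.SetGenericBinding(output.sourceObject, target);
            }
        }
    }
}
```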
In order to further explain the animation generation method provided in the embodiment of the present application, an application of the animation display page generation method in a specific implementation scenario will be described below, where the specific scenario is as follows:
(1) The game maker may add a plurality of animation display regions, which may be regularly shaped or special-shaped, to the animation display page in the game engine. Specifically, taking the addition of several special-shaped animation display regions as an example, the computer device adds a special-shape component, and the four corner positions of the UI panel (upper left, lower left, upper right and lower right) can be adjusted to realize the special-shaped display of the animation display regions.
(2) The game maker may add a preform for each animation display region in the game engine to put a plurality of preforms into the animation display page.
(3) The game maker may edit the UI display interface of each animation display region in the game engine. Specifically, the computer device may obtain region adjustment parameters and perform region adjustment on the target animation display region in the animation display page according to those parameters, so as to obtain an adjusted animation display region. The region adjustment parameters may include a region size adjustment parameter, a region position adjustment parameter, a region shape adjustment parameter and a region animation adjustment parameter, where the region size adjustment parameter is used to adjust the display size of the animation display region, the region position adjustment parameter is used to adjust the display position of the animation display region, the region shape adjustment parameter is used to adjust the display shape of the animation display region, and the region animation adjustment parameter is used to adjust the speed-line effect, the region edge effect and the region text or picture of the animation display region; specifically, an animation clip may be used to edit the motion effects of the animation display region.
(4) The game maker can edit the Timeline of the preform corresponding to each animation display region in the game engine. Specifically, by editing the Timeline of the entire animation display page, the UI animation of each animation display region and the playing of the virtual scene animation corresponding to the preform in that region can be controlled. The display and hiding of the preform in each animation display region, the display order of the animation display regions and so on can be set to obtain the edited animation display regions. In addition, the game producer can also control in the game engine whether several animation display regions are rendered simultaneously, so as to achieve the effect of playing several scenario segments at the same time.
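Whether several regions render at the same time ultimately comes down to whether their cameras are active; the brief sketch below illustrates one way to toggle this (helper name assumed).

```csharp
using UnityEngine;

// Sketch: enable or disable the cameras of the preforms bound to several
// display regions, switching simultaneous rendering on or off.
public static class RegionRenderControl
{
    public static void SetSimultaneousRendering(Camera[] regionCameras, bool enabled)
    {
        foreach (var cam in regionCameras)
        {
            cam.enabled = enabled;
        }
    }
}
```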
(5) The game producer can integrate multiple animation display pages in the game engine and then use a node graph to edit the game scenario of the whole game cutscene and play these animation display pages, so that virtual scene animations shot by different virtual cameras from different angles, at different times or at the same time point, are displayed in the same animation display page.
In summary, the embodiment of the application provides an animation generation method: an animation display page to be displayed is determined, where the animation display page includes a plurality of animation display regions, each animation display region is provided with a display parameter, and the display parameter includes a time parameter for determining the display time of each animation display region in the animation display page; then, based on the binding relationship between each animation display region in the animation display page and the corresponding preform, the preform bound to that animation display region is acquired, where one preform is configured with a designated virtual camera, a virtual game scene shot by the designated virtual camera, a virtual object arranged in the virtual game scene and motion information of the virtual object; finally, based on the display parameters, the binding relationship and the preform corresponding to each animation display region, the corresponding virtual scene animation is rendered in each animation display region of the animation display page and played through that animation display region. By adopting a multi-panel comic style of presentation, the embodiment of the application can display, in the same animation display page, virtual scene animations shot by different virtual cameras from different angles and at different time points, which improves the diversity of virtual camera shots and display angles, enriches the presentation of game cutscenes, gives the cutscenes rich visual expressiveness, and can effectively improve player retention.
In order to better implement the above method, the embodiment of the present application may further provide an animation generating apparatus, where the animation generating apparatus may be specifically integrated into a computer device, for example, may be a computer device such as a terminal.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an animation generating device according to an embodiment of the present application, where the device includes:
a determining unit 201, configured to determine an animation display page to be displayed, where the animation display page includes a plurality of animation display regions, each animation display region in the animation display page is provided with a display parameter, and the display parameter includes a time parameter for determining a display time of each animation display region in the animation display page;
an obtaining unit 202, configured to obtain, based on a binding relationship between each animation display region in the animation display page and a corresponding preform, a preform bound to the animation display region, where one of the preforms is configured with a specified virtual camera, a virtual game scene captured by the specified virtual camera, a virtual object set in the virtual game scene, and motion information of the virtual object;
And a rendering unit 203, configured to render, in each animation display area of the animation display page, a corresponding virtual scene animation based on the display parameter, the binding relationship, and the preform corresponding to the animation display area, and play the virtual scene animation through the animation display area.
In some embodiments, the animation generation device comprises:
the first acquisition subunit is used for acquiring a preset animation display page and determining a plurality of preformed bodies which are required to be bound for the animation display page, wherein the animation display page comprises a plurality of animation display areas;
and the first determination subunit is used for binding the plurality of the preformed units with the animation display page, determining the binding relation between each animation display area in the animation display page and the corresponding preformed unit, and binding the corresponding preformed unit for each animation display area in the animation display page.
In some embodiments, the animation generation device comprises:
a second determination subunit configured to determine a region number of an animation display region in the animation display page;
a creating subunit, configured to create a plurality of preset preforms based on the number of areas, where the number of preforms of the plurality of preset preforms is the same as the number of areas, and each preset preform is provided with play information on a specified time axis;
And the adding subunit is used for carrying out component adding processing on the preset preform to obtain a target preform.
In some embodiments, the animation generation device comprises:
and the association subunit is used for associating the appointed virtual camera component, the appointed virtual game scene component and the appointed virtual object component for the preset prefabricated body so as to add the appointed virtual camera, the virtual game scene and the virtual object in the preset prefabricated body and generate a target prefabricated body.
In some embodiments, the animation generation device comprises:
and the first processing subunit is used for editing the virtual object according to the appointed motion parameters to obtain the motion information of the virtual object.
In some embodiments, the animation generation device comprises:
and the second processing subunit is used for adjusting the camera shooting parameters of the appointed virtual camera according to the appointed camera shooting parameters to obtain an adjusted virtual camera with the adjusted camera shooting parameters, wherein the adjusted virtual camera is used for shooting the virtual game scene according to the adjusted camera shooting parameters.
In some embodiments, the animation generation device comprises:
The second acquisition subunit is used for acquiring the display page to be processed;
and the third processing subunit is used for carrying out area division processing on the to-be-processed display page according to the area division parameters to obtain an animation display page with a plurality of animation display areas.
In some embodiments, the animation generation device comprises:
and the third acquisition subunit is used for acquiring the region adjustment parameters, and carrying out region adjustment on the target animation display region in the animation display page according to the region adjustment parameters to obtain an adjusted animation display region.
In some embodiments, the animation generation device comprises:
and a fourth processing subunit, configured to perform binding processing on the multiple preforms and the multiple animation display pages based on the display parameters of each animation display region in the animation display page and the playing information of the target preform, and determine a binding relationship between each animation display region in the animation display page and the corresponding preform.
In some embodiments, the animation generation device comprises:
and a fifth processing subunit, configured to perform association processing on the preform and a root node of the animation display page, so as to bind the preform and the animation display page.
The embodiment of the application discloses an animation generation device. The determining unit 201 determines an animation display page to be displayed, where the animation display page includes a plurality of animation display regions, each animation display region is provided with a display parameter, and the display parameter includes a time parameter for determining the display time of each animation display region in the animation display page; the obtaining unit 202 obtains, based on the binding relationship between each animation display region in the animation display page and the corresponding preform, the preform bound to that animation display region, where one preform is configured with a designated virtual camera, a virtual game scene shot by the designated virtual camera, a virtual object set in the virtual game scene and motion information of the virtual object; the rendering unit 203 renders the corresponding virtual scene animation in each animation display region of the animation display page based on the display parameters, the binding relationship and the preform corresponding to that animation display region, and plays the virtual scene animation through the animation display region. By adopting a multi-panel comic style of presentation, the embodiment of the application can display, in the same animation display page, virtual scene animations shot by different virtual cameras from different angles and at different time points, which improves the diversity of virtual camera shots and display angles, enriches the presentation of game cutscenes, gives the cutscenes rich visual expressiveness, and can effectively improve player retention.
Correspondingly, the embodiment of the application also provides a computer device, which may be a terminal or a server; the terminal may be a device such as a smart phone, a tablet computer, a notebook computer, a touch screen, a game console, a personal computer (PC) or a personal digital assistant (PDA). As shown in fig. 4, fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present application. The computer device 300 includes a processor 301 having one or more processing cores, a memory 302 having one or more computer readable storage media, and a computer program stored on the memory 302 and executable on the processor. The processor 301 is electrically connected to the memory 302. It will be appreciated by those skilled in the art that the computer device structure shown in the figure does not limit the computer device, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
Processor 301 is a control center of computer device 300 and utilizes various interfaces and lines to connect various portions of the overall computer device 300, and to perform various functions of computer device 300 and process data by running or loading software programs and/or modules stored in memory 302 and invoking data stored in memory 302, thereby performing overall monitoring of computer device 300.
In the embodiment of the present application, the processor 301 in the computer device 300 loads the instructions corresponding to the processes of one or more application programs into the memory 302 according to the following steps, and the processor 301 executes the application programs stored in the memory 302, so as to implement various functions:
determining an animation display page to be displayed, wherein the animation display page comprises a plurality of animation display areas, each animation display area in the animation display page is provided with a display parameter, and the display parameter comprises a time parameter for determining the display time of each animation display area in the animation display page;
acquiring a preform bound with the animation display region based on the binding relation between each animation display region and the corresponding preform in the animation display page, wherein one preform is configured with a designated virtual camera, a virtual game scene shot by the designated virtual camera, a virtual object arranged in the virtual game scene and motion information of the virtual object;
and rendering corresponding virtual scene animation in each animation display area of the animation display page based on the display parameters, the binding relation and the prefabricated body corresponding to the animation display area, and playing the virtual scene animation through the animation display area.
In an embodiment, before determining the animation display page to be displayed, the method further comprises:
acquiring a preset animation display page, and determining a plurality of preformed bodies which are required to be bound for the animation display page, wherein the animation display page comprises a plurality of animation display areas;
binding the plurality of preforms with the animation display page, and determining the binding relation between each animation display region in the animation display page and the corresponding preform to bind the corresponding preform for each animation display region in the animation display page.
In one embodiment, the determining the plurality of preforms to which the animated display page is to be bound includes:
determining the area number of the animation display areas in the animation display page;
creating a plurality of preset preforms based on the number of areas, wherein the number of the preforms of the preset preforms is the same as the number of the areas, and each preset preform is provided with playing information on a designated time axis;
and performing component adding processing on the preset preform to obtain a target preform.
In an embodiment, the performing a component adding process on the preset preform to obtain a target preform includes:
And associating a designated virtual camera component, a designated virtual game scene component and a designated virtual object component for the preset preform to add the designated virtual camera, the virtual game scene and the virtual object in the preset preform so as to generate a target preform.
In an embodiment, after associating the specified virtual camera component, the specified virtual game scene component, and the specified virtual object component with the preset preform to add the specified virtual camera, the virtual game scene, and the virtual object to the preset preform to generate the target preform, the method further includes:
and editing the virtual object according to the appointed motion parameters to obtain the motion information of the virtual object.
In an embodiment, after associating the specified virtual camera component, the specified virtual game scene component, and the specified virtual object component with the preset preform to add the specified virtual camera, the virtual game scene, and the virtual object to the preset preform to generate the target preform, the method further includes:
and adjusting the camera shooting parameters of the appointed virtual camera according to the appointed camera shooting parameters to obtain an adjusted virtual camera with the adjusted camera shooting parameters, wherein the adjusted virtual camera is used for shooting the virtual game scene according to the adjusted camera shooting parameters.
In one embodiment, before acquiring the preset animation display page, the method further includes:
acquiring a display page to be processed;
and carrying out region division processing on the to-be-processed display page according to the region division parameters to obtain an animation display page with a plurality of animation display regions.
In an embodiment, after performing region division processing on the to-be-processed display page according to the region division parameter to obtain an animation display page with a plurality of animation display regions, the method further includes:
and acquiring an area adjustment parameter, and carrying out area adjustment on a target animation display area in the animation display page according to the area adjustment parameter to obtain an adjusted animation display area.
In an embodiment, binding the plurality of preforms with the animation display page and determining the binding relationship between each animation display region in the animation display page and the corresponding preform, so as to bind the corresponding preform for each animation display region in the animation display page, includes:
and binding the plurality of the preforms and the animation display pages based on the display parameters of the animation display areas in the animation display pages and the playing information of the target preform, and determining the binding relation between the animation display areas in the animation display pages and the corresponding preforms.
In one embodiment, the binding the plurality of preforms with the animation display page includes:
and carrying out association processing on the prefabricated body and the root node of the animation display page so as to bind the prefabricated body and the animation display page.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Optionally, as shown in fig. 4, the computer device 300 further includes: a touch display 303, a radio frequency circuit 304, an audio circuit 305, an input unit 306, and a power supply 307. The processor 301 is electrically connected to the touch display 303, the radio frequency circuit 304, the audio circuit 305, the input unit 306, and the power supply 307, respectively. Those skilled in the art will appreciate that the computer device structure shown in FIG. 4 is not limiting of the computer device and may include more or fewer components than shown, or may be combined with certain components, or a different arrangement of components.
The touch display 303 may be used to display a graphical user interface and receive operation instructions generated by a user acting on the graphical user interface. The touch display 303 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user as well as the various graphical user interfaces of the computer device, which may be composed of graphics, text, icons, video and any combination thereof. Alternatively, the display panel may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. The touch panel may be used to collect touch operations performed by the user on or near it (such as operations performed on or near the touch panel using a finger, a stylus or any other suitable object or accessory) and generate corresponding operation instructions, which in turn execute the corresponding programs. Alternatively, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal produced by the touch operation and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates and sends them to the processor 301, and can also receive and execute commands sent from the processor 301. The touch panel may overlay the display panel, and when the touch panel detects a touch operation on or near it, the operation is passed to the processor 301 to determine the type of touch event, after which the processor 301 provides a corresponding visual output on the display panel according to the type of touch event. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display 303 to implement the input and output functions. In some embodiments, however, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions; that is, the touch display 303 may also implement an input function as part of the input unit 306.
In the embodiment of the present application, the processor 301 generates a graphical user interface on the touch display 303 by executing an application program. The touch display 303 is used for presenting the graphical user interface and receiving operation instructions generated by the user acting on the graphical user interface.
The radio frequency circuit 304 may be used to transmit and receive radio frequency signals so as to establish wireless communication with a network device or another computer device and exchange signals with it.
The audio circuit 305 may be used to provide an audio interface between the user and the computer device through a speaker and a microphone. On the one hand, the audio circuit 305 may convert received audio data into an electrical signal and transmit it to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts collected sound signals into electrical signals, which are received by the audio circuit 305 and converted into audio data; the audio data are then processed by the processor 301 and sent, for example, to another computer device via the radio frequency circuit 304, or output to the memory 302 for further processing. The audio circuit 305 may also include an earphone jack to provide communication between a peripheral earphone and the computer device.
The input unit 306 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 307 is used to supply power to the various components of the computer device 300. Optionally, the power supply 307 may be logically connected to the processor 301 through a power management system, so that functions such as charging, discharging, and power consumption management are handled by the power management system. The power supply 307 may also include one or more of a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and other such components.
Although not shown in FIG. 4, the computer device 300 may further include a camera, a sensor, a wireless fidelity module, a Bluetooth module, and the like, which are not described herein.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
As can be seen from the foregoing, the computer device provided in this embodiment determines an animation display page to be displayed, where the animation display page includes a plurality of animation display regions, each animation display region in the animation display page is provided with a display parameter, and the display parameter includes a time parameter for determining the display time of each animation display region in the animation display page; then, based on the binding relation between each animation display region in the animation display page and the corresponding preform, it obtains the preform bound to each animation display region, where one preform is configured with a designated virtual camera, a virtual game scene shot by the designated virtual camera, a virtual object arranged in the virtual game scene, and motion information of the virtual object; finally, based on the display parameters, the binding relation, and the preforms corresponding to the animation display regions, it renders the corresponding virtual scene animation in each animation display region of the animation display page and plays the virtual scene animation through that animation display region. By adopting a multi-panel comic style of presentation, the embodiment of the present application can display, in the same animation display page, virtual scene animations shot from different angles by different virtual cameras at different time points, which increases the diversity of virtual camera shots and display angles, enriches the expression forms of game cutscenes, provides rich visual expressiveness for game cutscenes, and can effectively improve player stickiness.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods in the above embodiments may be completed by instructions, or by instructions controlling associated hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a computer readable storage medium having stored therein a plurality of computer programs that can be loaded by a processor to perform steps in any of the animation generation methods provided by the embodiments of the present application. For example, the computer program may perform the steps of:
determining an animation display page to be displayed, wherein the animation display page comprises a plurality of animation display areas, each animation display area in the animation display page is provided with a display parameter, and the display parameter comprises a time parameter for determining the display time of each animation display area in the animation display page;
acquiring a preform bound with the animation display region based on the binding relation between each animation display region and the corresponding preform in the animation display page, wherein one preform is configured with a designated virtual camera, a virtual game scene shot by the designated virtual camera, a virtual object arranged in the virtual game scene and motion information of the virtual object;
and rendering a corresponding virtual scene animation in each animation display area of the animation display page based on the display parameters, the binding relation and the preform corresponding to the animation display area, and playing the virtual scene animation through the animation display area.
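For readers who prefer a concrete outline, the following Python sketch walks through the three steps just listed: determining the page, looking up the preform bound to each region, and rendering each region once its time parameter is reached. All names are illustrative, and the engine-specific rendering call is reduced to a stub.

```python
import time
from dataclasses import dataclass

@dataclass
class Preform:                 # hypothetical: camera + scene + objects + motion info
    name: str

@dataclass
class Region:                  # an animation display region inside the page
    region_id: str
    display_time: float        # time parameter: when this region starts playing (seconds)

def render_preform_in_region(region: Region, preform: Preform) -> None:
    # Stub for the engine call that renders the preform's virtual scene
    # animation into the given region of the page.
    print(f"t={region.display_time}s: play '{preform.name}' in region '{region.region_id}'")

def play_animation_page(regions, bindings):
    """bindings maps region_id -> Preform (the binding relation)."""
    start = time.monotonic()
    pending = sorted(regions, key=lambda r: r.display_time)
    while pending:
        elapsed = time.monotonic() - start
        while pending and pending[0].display_time <= elapsed:
            region = pending.pop(0)
            render_preform_in_region(region, bindings[region.region_id])
        time.sleep(0.01)

regions = [Region("top", 0.0), Region("bottom", 1.5)]
bindings = {"top": Preform("close_up"), "bottom": Preform("wide_shot")}
play_animation_page(regions, bindings)
```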
In an embodiment, before determining the animation display page to be displayed, the method further comprises:
acquiring a preset animation display page, and determining a plurality of preforms that need to be bound for the animation display page, wherein the animation display page comprises a plurality of animation display areas;
binding the plurality of preforms with the animation display page, and determining the binding relation between each animation display region in the animation display page and the corresponding preform to bind the corresponding preform for each animation display region in the animation display page.
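A rough sketch of this setup flow, in Python, is given below; the one-preform-per-region pairing and all identifiers are assumptions made for illustration.

```python
def determine_preforms(page_regions):
    # One preform per region; the names are illustrative placeholders.
    return [f"preform_{i}" for i, _ in enumerate(page_regions)]

def bind_preforms(page_regions, preforms):
    # The binding relation: each animation display region is paired with one preform.
    return dict(zip(page_regions, preforms))

page_regions = ["region_0", "region_1", "region_2"]   # a preset page with three regions
binding_relation = bind_preforms(page_regions, determine_preforms(page_regions))
print(binding_relation)  # {'region_0': 'preform_0', 'region_1': 'preform_1', 'region_2': 'preform_2'}
```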
In one embodiment, the determining the plurality of preforms that need to be bound for the animation display page includes:
determining the area number of the animation display areas in the animation display page;
creating a plurality of preset preforms based on the number of areas, wherein the number of preset preforms is the same as the number of areas, and each preset preform is provided with playing information on a designated time axis;
and performing component adding processing on the preset preform to obtain a target preform.
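The following Python sketch illustrates, under assumed data structures, how a set of preset preforms could be created to match the region count, each carrying playing information on its own time axis; the (start, duration) layout is an assumption, not part of the original method.

```python
from dataclasses import dataclass, field

@dataclass
class PresetPreform:
    name: str
    # Playing information on a designated time axis: (start, duration) in seconds.
    timeline: tuple = (0.0, 3.0)
    components: list = field(default_factory=list)   # filled in by a later step

def create_preset_preforms(region_count: int):
    # The number of preset preforms matches the number of animation display regions.
    return [PresetPreform(name=f"preset_{i}", timeline=(i * 3.0, 3.0))
            for i in range(region_count)]

presets = create_preset_preforms(region_count=4)
print([(p.name, p.timeline) for p in presets])
```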
In an embodiment, the performing a component adding process on the preset preform to obtain a target preform includes:
and associating a designated virtual camera component, a designated virtual game scene component and a designated virtual object component for the preset preform to add the designated virtual camera, the virtual game scene and the virtual object in the preset preform so as to generate a target preform.
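As a minimal, hypothetical sketch of this component adding step, the Python example below attaches a virtual camera, a virtual game scene and virtual objects to a preset preform; the component keys and dictionary layout are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class PresetPreform:
    name: str
    components: dict = field(default_factory=dict)

def add_components(preset, camera, scene, objects):
    """Associate the specified components so the preset becomes a target preform."""
    preset.components["virtual_camera"] = camera
    preset.components["virtual_game_scene"] = scene
    preset.components["virtual_objects"] = objects
    return preset   # the returned value plays the role of the target preform

target = add_components(PresetPreform("preset_0"),
                        camera={"fov": 60, "position": (0, 5, -10)},
                        scene="castle_courtyard",
                        objects=["hero", "dragon"])
print(target.components["virtual_camera"])
```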
In an embodiment, after associating the specified virtual camera component, the specified virtual game scene component, and the specified virtual object component with the preset preform to add the specified virtual camera, the virtual game scene, and the virtual object to the preset preform to generate the target preform, the method further includes:
and editing the virtual object according to the appointed motion parameters to obtain the motion information of the virtual object.
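One possible reading of this editing step, expressed as a Python sketch with an assumed keyframe representation, is shown below; the linear interpolation is merely an example of how specified motion parameters might be turned into motion information.

```python
def edit_motion(virtual_object: dict, motion_params: dict) -> dict:
    """Produce motion information (keyframes) for a virtual object from the
    specified motion parameters; the keyframe layout here is purely illustrative."""
    start, end = motion_params["start_pos"], motion_params["end_pos"]
    steps = motion_params["keyframes"]
    keyframes = [
        tuple(s + (e - s) * t / (steps - 1) for s, e in zip(start, end))
        for t in range(steps)
    ]
    return {"object": virtual_object["name"], "keyframes": keyframes}

motion_info = edit_motion({"name": "hero"},
                          {"start_pos": (0.0, 0.0, 0.0),
                           "end_pos": (4.0, 0.0, 2.0),
                           "keyframes": 5})
print(motion_info["keyframes"])
```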
In an embodiment, after associating the specified virtual camera component, the specified virtual game scene component, and the specified virtual object component with the preset preform to add the specified virtual camera, the virtual game scene, and the virtual object to the preset preform to generate the target preform, the method further includes:
and adjusting the camera shooting parameters of the appointed virtual camera according to the appointed camera shooting parameters to obtain an adjusted virtual camera with the adjusted camera shooting parameters, wherein the adjusted virtual camera is used for shooting the virtual game scene according to the adjusted camera shooting parameters.
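A hedged Python sketch of this adjustment step follows; representing the camera as a plain dictionary and overriding its entries with the specified parameters is an assumption made for illustration.

```python
def adjust_camera(camera: dict, specified_params: dict) -> dict:
    """Return an adjusted virtual camera whose shooting parameters are
    overridden by the specified camera parameters."""
    adjusted = dict(camera)          # keep the original camera untouched
    adjusted.update(specified_params)
    return adjusted

default_camera = {"fov": 60, "position": (0, 5, -10), "look_at": (0, 1, 0)}
adjusted_camera = adjust_camera(default_camera, {"fov": 35, "position": (2, 1.5, -4)})
print(adjusted_camera)   # the adjusted camera now shoots the scene with the new parameters
```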
In one embodiment, before acquiring the preset animation display page, the method further includes:
acquiring a display page to be processed;
and carrying out region division processing on the to-be-processed display page according to the region division parameters to obtain an animation display page with a plurality of animation display regions.
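The following Python sketch shows one way such a region division could work when the division parameter is a simple row/column grid; the grid assumption and rectangle format are illustrative.

```python
def divide_page(page_width: int, page_height: int, rows: int, cols: int):
    """Split a to-be-processed display page into rows x cols animation display
    regions, returned as (x, y, width, height) rectangles."""
    cell_w, cell_h = page_width // cols, page_height // rows
    return [(c * cell_w, r * cell_h, cell_w, cell_h)
            for r in range(rows) for c in range(cols)]

regions = divide_page(page_width=1920, page_height=1080, rows=2, cols=2)
print(regions)   # four equally sized animation display regions
```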
In an embodiment, after performing region division processing on the to-be-processed display page according to the region division parameter to obtain an animation display page with a plurality of animation display regions, the method further includes:
and acquiring an area adjustment parameter, and carrying out area adjustment on a target animation display area in the animation display page according to the area adjustment parameter to obtain an adjusted animation display area.
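A small Python sketch of the area adjustment step is given below, assuming the adjustment parameter consists of position offsets and size deltas; these assumptions are not stated in the original text.

```python
def adjust_region(region, dx=0, dy=0, dw=0, dh=0):
    """Apply an area adjustment parameter (offsets and size deltas) to a
    target animation display region given as (x, y, width, height)."""
    x, y, w, h = region
    return (x + dx, y + dy, w + dw, h + dh)

target_region = (0, 0, 960, 540)
adjusted_region = adjust_region(target_region, dx=40, dy=20, dw=-80, dh=-40)
print(adjusted_region)   # (40, 20, 880, 500)
```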
In an embodiment, the binding the plurality of preforms with the animation display page and determining the binding relation between each animation display region in the animation display page and the corresponding preform, so as to bind the corresponding preform for each animation display region in the animation display page, includes:
and binding the plurality of preforms with the animation display page based on the display parameters of the animation display areas in the animation display page and the playing information of the target preforms, and determining the binding relation between the animation display areas in the animation display page and the corresponding preforms.
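By way of a hedged example, the Python sketch below pairs regions with preforms by matching each region's time parameter against each preform's playing start time; the matching rule and the dictionary-based inputs are assumptions for illustration.

```python
def bind_by_time(regions, preforms):
    """Pair each animation display region with the preform whose playing
    information (start time on its time axis) matches the region's time
    parameter; both inputs are dicts keyed by name with a start time in seconds."""
    binding = {}
    for region_name, display_time in regions.items():
        for preform_name, play_start in preforms.items():
            if abs(display_time - play_start) < 1e-6 and preform_name not in binding.values():
                binding[region_name] = preform_name
                break
    return binding

regions = {"panel_top": 0.0, "panel_bottom": 3.0}      # display parameters (time)
preforms = {"shot_wide": 0.0, "shot_close": 3.0}       # playing information
print(bind_by_time(regions, preforms))  # {'panel_top': 'shot_wide', 'panel_bottom': 'shot_close'}
```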
In one embodiment, the binding the plurality of preforms with the animation display page includes:
and carrying out association processing on the preform and the root node of the animation display page so as to bind the preform to the animation display page.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Wherein the storage medium may include: a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like.
Since the storage medium stores a computer program, the steps in any of the animation generation methods provided in the embodiments of the present application can be performed: first, an animation display page to be displayed is determined, where the animation display page includes a plurality of animation display regions, each animation display region in the animation display page is provided with a display parameter, and the display parameter includes a time parameter for determining the display time of each animation display region in the animation display page; then, based on the binding relation between each animation display region in the animation display page and the corresponding preform, the preform bound to each animation display region is obtained, where one preform is configured with a designated virtual camera, a virtual game scene shot by the designated virtual camera, a virtual object arranged in the virtual game scene, and motion information of the virtual object; finally, based on the display parameters, the binding relation, and the preforms corresponding to the animation display regions, the corresponding virtual scene animation is rendered in each animation display region of the animation display page and played through that animation display region. By adopting a multi-panel comic style of presentation, the embodiment of the present application can display, in the same animation display page, virtual scene animations shot from different angles by different virtual cameras at different time points, which increases the diversity of virtual camera shots and display angles, enriches the expression forms of game cutscenes, provides rich visual expressiveness for game cutscenes, and can effectively improve player stickiness.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
The foregoing has described in detail the animation generation method, apparatus, computer device, and computer-readable storage medium provided in the embodiments of the present application. Specific examples are used herein to illustrate the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the technical solutions and core ideas of the present application. Those of ordinary skill in the art will appreciate that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (13)

1. An animation generation method, comprising:
determining an animation display page to be displayed, wherein the animation display page comprises a plurality of animation display areas, each animation display area in the animation display page is provided with a display parameter, and the display parameter comprises a time parameter for determining the display time of each animation display area in the animation display page;
acquiring a preform bound with the animation display region based on the binding relation between each animation display region and the corresponding preform in the animation display page, wherein one preform is configured with a designated virtual camera, a virtual game scene shot by the designated virtual camera, a virtual object arranged in the virtual game scene and motion information of the virtual object;
and rendering a corresponding virtual scene animation in each animation display area of the animation display page based on the display parameters, the binding relation and the preform corresponding to the animation display area, and playing the virtual scene animation through the animation display area.
2. The animation generation method according to claim 1, further comprising, before determining an animation display page to be displayed:
acquiring a preset animation display page, and determining a plurality of preforms that need to be bound for the animation display page, wherein the animation display page comprises a plurality of animation display areas;
binding the plurality of preforms with the animation display page, and determining the binding relation between each animation display region in the animation display page and the corresponding preform to bind the corresponding preform for each animation display region in the animation display page.
3. The animation generation method of claim 2, wherein the determining the plurality of preforms that need to be bound for the animation display page comprises:
determining the area number of the animation display areas in the animation display page;
creating a plurality of preset preforms based on the number of areas, wherein the number of preset preforms is the same as the number of areas, and each preset preform is provided with playing information on a designated time axis;
and performing component adding processing on the preset preform to obtain a target preform.
4. The animation generation method according to claim 3, wherein the performing a component addition process on the preset preform to obtain a target preform comprises:
and associating a designated virtual camera component, a designated virtual game scene component and a designated virtual object component for the preset preform to add the designated virtual camera, the virtual game scene and the virtual object in the preset preform so as to generate a target preform.
5. The animation generation method of claim 4, further comprising, after associating a specified virtual camera component, a specified virtual game scene component, and a specified virtual object component for the preset preform to add a specified virtual camera, a virtual game scene, and a virtual object in the preset preform to generate a target preform:
and editing the virtual object according to the appointed motion parameters to obtain the motion information of the virtual object.
6. The animation generation method of claim 4, further comprising, after associating a specified virtual camera component, a specified virtual game scene component, and a specified virtual object component for the preset preform to add a specified virtual camera, a virtual game scene, and a virtual object in the preset preform to generate a target preform:
and adjusting the camera shooting parameters of the appointed virtual camera according to the appointed camera shooting parameters to obtain an adjusted virtual camera with the adjusted camera shooting parameters, wherein the adjusted virtual camera is used for shooting the virtual game scene according to the adjusted camera shooting parameters.
7. The animation generation method of claim 4, further comprising, prior to acquiring the preset animation display page:
acquiring a display page to be processed;
and carrying out region division processing on the to-be-processed display page according to the region division parameters to obtain an animation display page with a plurality of animation display regions.
8. The animation generation method according to claim 7, wherein after performing region division processing on the to-be-processed display page according to a region division parameter to obtain an animation display page having a plurality of animation display regions, further comprising:
and acquiring an area adjustment parameter, and carrying out area adjustment on a target animation display area in the animation display page according to the area adjustment parameter to obtain an adjusted animation display area.
9. The method of generating an animation according to claim 7, wherein the binding the plurality of preforms with the animation display page, determining a binding relationship between each animation display region in the animation display page and a corresponding preform to bind the corresponding preform for each animation display region in the animation display page, comprises:
and binding the plurality of preforms with the animation display page based on the display parameters of the animation display areas in the animation display page and the playing information of the target preforms, and determining the binding relation between the animation display areas in the animation display page and the corresponding preforms.
10. The animation generation method of claim 9, wherein the binding the plurality of preforms with the animation display page comprises:
and carrying out association processing on the preform and the root node of the animation display page so as to bind the preform to the animation display page.
11. An animation generation device, comprising:
a determining unit, configured to determine an animation display page to be displayed, where the animation display page includes a plurality of animation display regions, each animation display region in the animation display page is provided with a display parameter, and the display parameter includes a time parameter for determining a display time of each animation display region in the animation display page;
an obtaining unit, configured to obtain a preform bound to the animation display region based on a binding relationship between each animation display region and a corresponding preform in the animation display page, where one preform is configured with a specified virtual camera, a virtual game scene shot by the specified virtual camera, a virtual object set in the virtual game scene, and motion information of the virtual object;
and a rendering unit, configured to render a corresponding virtual scene animation in each animation display area of the animation display page based on the display parameters, the binding relation and the preform corresponding to the animation display area, and to play the virtual scene animation through the animation display area.
12. A computer device comprising a processor, a memory and a computer program stored on the memory and capable of running on the processor, which when executed by the processor implements the animation generation method of any of claims 1 to 10.
13. A computer-readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the animation generation method according to any of claims 1 to 10.
CN202310240046.XA 2023-03-07 2023-03-07 Animation generation method, animation generation device, computer equipment and computer readable storage medium Pending CN116109737A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310240046.XA CN116109737A (en) 2023-03-07 2023-03-07 Animation generation method, animation generation device, computer equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310240046.XA CN116109737A (en) 2023-03-07 2023-03-07 Animation generation method, animation generation device, computer equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN116109737A true CN116109737A (en) 2023-05-12

Family

ID=86254437

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310240046.XA Pending CN116109737A (en) 2023-03-07 2023-03-07 Animation generation method, animation generation device, computer equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN116109737A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination