CN111179384A - Method and device for showing main body

Method and device for showing main body

Info

Publication number
CN111179384A
CN111179384A
Authority
CN
China
Prior art keywords
main body
role
animation
generating
attributes
Prior art date
Legal status
Pending
Application number
CN201911402444.7A
Other languages
Chinese (zh)
Inventor
金少博
Current Assignee
People's Happiness Co ltd
Original Assignee
Beijing Kingsoft Internet Security Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Kingsoft Internet Security Software Co Ltd filed Critical Beijing Kingsoft Internet Security Software Co Ltd
Priority to CN201911402444.7A priority Critical patent/CN111179384A/en
Publication of CN111179384A publication Critical patent/CN111179384A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a main body display method and a main body display device. The method includes: acquiring a role attribute and a role skill attribute corresponding to a main body identifier; generating a plurality of key static frames corresponding to the main body identifier according to the role attribute and the role skill attribute; loading the plurality of key static frames with an editor and recording an animation example corresponding to the main body identifier; and generating a resource management script according to the animation example corresponding to the main body identifier, and calling the resource management script to display the animation example according to a playing request of a user. In this way, each main body is displayed visually in the form of an animation example, which makes it easier for the user to select a main body and improves the display efficiency of the main body while ensuring that the main body is displayed intuitively.

Description

Method and device for showing main body
Technical Field
The present application relates to the field of vision processing technologies, and in particular, to a method and an apparatus for displaying a subject.
Background
Generally, in application scenarios such as games or chat, various subjects are provided for users to choose from, and the chat or game application represents the user's avatar with the subject selected by the user.
In the related art, in order to help the user select a main body, introduction information for each main body is displayed in text form, so that the user can choose a main body according to that information. This way of introducing a main body is clearly not intuitive enough.
Disclosure of Invention
The application provides a method and a device for displaying a main body, intended to solve the technical problem in the prior art that the display of a main body is not intuitive and therefore affects the user's selection of a main body.
An embodiment of an aspect of the present application provides a method for displaying a main body, including: acquiring a role attribute and a role skill attribute corresponding to a main body identifier; generating a plurality of key static frames corresponding to the main body identifier according to the role attribute and the role skill attribute; loading the plurality of key static frames with an editor, and recording an animation example corresponding to the main body identifier; and generating a resource management script according to the animation example corresponding to the main body identifier, and calling the resource management script to display the animation example according to a playing request of a user.
In addition, the subject presentation method according to the embodiment of the present application further includes the following additional technical features:
in a possible implementation manner of the embodiment of the present application, the generating, according to the role attribute and the role skill attribute, a plurality of key static frames corresponding to the body identifier includes: determining role parameters of a plurality of angles according to the role attributes corresponding to the main body identification, and generating a plurality of corresponding first static frames according to the role parameters of the plurality of angles; determining a plurality of time point equipment rendering parameters according to the role skill attributes corresponding to the main body identification, and generating a plurality of corresponding second static frames according to the plurality of time point equipment rendering parameters; determining a plurality of role device fusion parameters corresponding to the trigger conditions according to the role attributes and the role skill attributes corresponding to the main body identification, and generating a plurality of corresponding third static frames according to the plurality of role device fusion parameters corresponding to the trigger conditions.
In a possible implementation manner of this embodiment of the present application, the generating a resource management script according to the animation example corresponding to the subject identifier includes: adding a plurality of animation editing strategies at a root node, and adding a corresponding relation between a main body identifier and an animation example identifier which need to be loaded at a child node.
In a possible implementation manner of the embodiment of the present application, the invoking the resource management script according to the play request of the user to display the animation example includes: receiving a main body preview playing request sent by a user; and querying a plurality of animation editing strategies added by the root node to call an overall playing strategy, acquiring animation example identifications on child nodes in sequence, and playing the animation examples corresponding to the animation example identifications.
In a possible implementation manner of the embodiment of the present application, the invoking the resource management script according to the play request of the user to display the animation example includes: receiving a main body subscription playing request which is sent by a user and carries user preference characteristics; inquiring a plurality of animation editing strategies added by the root node to call a searching and playing strategy, and acquiring a target subject identifier successfully matched with the user preference characteristics; and acquiring a corresponding target animation example identifier on the child node according to the target main body identifier, and playing a target animation example corresponding to the target animation example identifier.
An embodiment of an aspect of the present application provides a main body display device, including: an acquiring module, used to acquire the role attribute and the role skill attribute corresponding to the main body identifier; a generating module, used to generate a plurality of key static frames corresponding to the main body identifier according to the role attribute and the role skill attribute; a recording module, used to load the plurality of key static frames with an editor and record the animation example corresponding to the main body identifier; the generating module being further used to generate a resource management script according to the animation example corresponding to the main body identifier; and a display module, used to call the resource management script to display the animation example according to the playing request of the user.
In addition, the main body display device of the embodiment of the present application further includes the following additional technical features:
in a possible implementation manner of the embodiment of the present application, the generating module is specifically configured to: determining role parameters of a plurality of angles according to the role attributes corresponding to the main body identification, and generating a plurality of corresponding first static frames according to the role parameters of the plurality of angles; determining a plurality of time point equipment rendering parameters according to the role skill attributes corresponding to the main body identification, and generating a plurality of corresponding second static frames according to the plurality of time point equipment rendering parameters; determining a plurality of role device fusion parameters corresponding to the trigger conditions according to the role attributes and the role skill attributes corresponding to the main body identification, and generating a plurality of corresponding third static frames according to the plurality of role device fusion parameters corresponding to the trigger conditions.
In a possible implementation manner of the embodiment of the present application, the generating module is specifically configured to: adding a plurality of animation editing strategies at a root node, and adding a corresponding relation between a main body identifier and an animation example identifier which need to be loaded at a child node.
In a possible implementation manner of the embodiment of the present application, the display module is specifically configured to: receiving a main body preview playing request sent by a user; and querying a plurality of animation editing strategies added by the root node to call an overall playing strategy, acquiring animation example identifications on child nodes in sequence, and playing the animation examples corresponding to the animation example identifications.
In a possible implementation manner of the embodiment of the present application, the display module is specifically configured to: receiving a main body subscription playing request which is sent by a user and carries user preference characteristics; inquiring a plurality of animation editing strategies added by the root node to call a searching and playing strategy, and acquiring a target subject identifier successfully matched with the user preference characteristics; and acquiring a corresponding target animation example identifier on the child node according to the target main body identifier, and playing a target animation example corresponding to the target animation example identifier.
Another embodiment of the present application provides an electronic device, including a processor and a memory; wherein the processor executes a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to implement the subject presentation method according to the above embodiment.
Another embodiment of the present application provides a non-transitory computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the subject presentation method according to the foregoing embodiment.
The technical scheme provided by the embodiment of the application at least has the following technical effects:
the method comprises the steps of obtaining role attributes and role skill attributes corresponding to a main body identification, generating a plurality of key static frames corresponding to the main body identification according to the role attributes and the role skill attributes, loading the plurality of key static frames according to an editor, recording animation examples corresponding to the main body identification, further generating a resource management script according to the animation examples corresponding to the main body identification, and calling the resource management script to display the animation examples according to a playing request of a user. Therefore, each main body is visually displayed based on the form of the animation example, the selection of the main body by a user is facilitated, and the display efficiency of the main body is improved on the basis of ensuring the visual display of the main body.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow diagram of a subject presentation method according to one embodiment of the present application;
FIG. 2-1 is a schematic diagram of a resource management script generation scenario, according to one embodiment of the present application;
FIG. 2-2 is a schematic diagram of a resource management script generation scenario according to another embodiment of the present application;
FIG. 3 is a schematic structural diagram of a body presentation apparatus according to one embodiment of the present application; and
FIG. 4 illustrates a block diagram of an exemplary electronic device suitable for use in implementing embodiments of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
A subject presentation method and apparatus of embodiments of the present application are described below with reference to the drawings. The main body in the embodiments of the application can be understood as a virtual character, a virtual animal, a virtual object, or the like used in scenes such as games or chats. The main body corresponds to the user's avatar in such scenes, and the avatar usually needs to be selected by the user in advance; the present application displays each main body visually in order to make that selection easier.
Fig. 1 is a flowchart of a subject presentation method according to an embodiment of the present application. As shown in Fig. 1, the method includes the following steps.
Step 101: obtain the role attribute and the role skill attribute corresponding to the subject identifier.
The main body identifier may be any information that uniquely identifies a main body, such as a main body name or a main body number. A main body generally has a corresponding role attribute and role skill attribute, which are set together when the main body is designed. The role attribute includes appearance information of the role, such as hair style, color, and clothing information; the role skill attribute includes appearance information of the main body's weapon, animation information corresponding to its attack skills, and the like.
Specifically, the role attribute and the role skill attribute corresponding to the subject identifier may be obtained from a database.
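For illustration only, the data involved in step 101 can be sketched roughly as follows. The field names, the subject identifier "hero_001", and the in-memory ATTRIBUTE_DB stand-in for the database are assumptions made for this sketch and are not structures defined by the application.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class RoleAttribute:
    """Appearance information of the role: hair style, colors, clothing, ..."""
    hair_style: str
    colors: List[str]
    clothing: str


@dataclass
class RoleSkillAttribute:
    """Weapon appearance and the animation information of an attack skill."""
    weapon: str
    skill_animation: str


# Hypothetical in-memory stand-in for the database mentioned above.
ATTRIBUTE_DB: Dict[str, dict] = {
    "hero_001": {
        "role": RoleAttribute("short", ["red", "black"], "light armor"),
        "skills": [RoleSkillAttribute("sword", "light_wave_attack")],
    },
}


def get_attributes(subject_id: str) -> Tuple[RoleAttribute, List[RoleSkillAttribute]]:
    """Step 101: look up the role attribute and role skill attributes by subject id."""
    record = ATTRIBUTE_DB[subject_id]
    return record["role"], record["skills"]
```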
Step 102: generate a plurality of key static frames corresponding to the main body identifier according to the role attribute and the role skill attribute.
Specifically, a plurality of key static frames corresponding to the subject identifier are generated according to the role attribute and the role skill attribute; these key static frames reflect the basic situation of the role attribute and the role skill attribute.
As a possible implementation, role parameters for a plurality of angles are determined according to the role attribute corresponding to the main body identifier, where a role parameter includes the appearance information of the main body at the corresponding angle, for example the hair style of the main body seen from that angle. A plurality of corresponding first static frames are then generated from the role parameters of the plurality of angles; these first static frames represent the appearance of the main body from the plurality of angles.
Furthermore, equipment rendering parameters at a plurality of time points are determined according to the role skill attribute corresponding to the subject identifier. A time point here can be understood as a stage in the running of the corresponding skill, such as the starting stage, an intermediate stage, or the ending stage; the equipment rendering parameters include the animation, color, rendering special effects, and other information rendered by the skill at the corresponding time point. A plurality of corresponding second static frames are generated from the rendering parameters at the plurality of time points; these second static frames reflect the visual effect of the skill while it runs.
Finally, a plurality of role-equipment fusion parameters corresponding to trigger conditions are determined according to the role attribute and the role skill attribute corresponding to the main body identifier, and a plurality of corresponding third static frames are generated from these fusion parameters. A trigger condition associates a role attribute with a role skill attribute. For example, if the trigger condition is a light-wave attack launched from behind, the corresponding role attribute is the appearance of the main body seen from the back, and the role skill attribute is the effect rendered by the equipment rendering parameters of the light-wave attack; combining the two yields the visual effect of a light-wave attack launched from behind. The third static frames thus reflect, to a certain extent, the visual effect of combining a role attribute with the corresponding role skill attribute.
As another possible implementation, the main body may be controlled to perform different skills, and the key static frames are generated from video frames of the main body performing those skills. For example, the area occupied by the main body in a video frame and the area covered when the corresponding role skill is performed are identified with an entity recognition technology, and the plurality of key static frames are generated by combining the identified areas with a solid-color background or the like.
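A minimal sketch of how the three families of key static frames described above might be assembled is given below. The angle list, the skill time points, the trigger condition names, and the render placeholder are assumptions for the example; actual image rendering is not shown.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class StaticFrame:
    kind: str     # "first" (angle), "second" (skill time point) or "third" (fusion)
    params: dict  # the parameters the frame was rendered from


def render(kind: str, params: dict) -> StaticFrame:
    # Placeholder for actually rendering one static frame from its parameters.
    return StaticFrame(kind=kind, params=params)


def generate_key_frames(role_attr, skill_attrs,
                        angles=(0, 90, 180, 270),
                        time_points=("start", "middle", "end"),
                        triggers=("attack_from_behind",)) -> List[StaticFrame]:
    frames: List[StaticFrame] = []
    # First static frames: the main body's appearance seen from several angles.
    for angle in angles:
        frames.append(render("first", {"angle": angle, "role": role_attr}))
    # Second static frames: equipment rendering parameters at skill time points.
    for skill in skill_attrs:
        for point in time_points:
            frames.append(render("second", {"time_point": point, "skill": skill}))
    # Third static frames: role-equipment fusion parameters per trigger condition.
    for trigger in triggers:
        frames.append(render("third", {"trigger": trigger,
                                       "role": role_attr, "skills": skill_attrs}))
    return frames
```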
Step 103: load the plurality of key static frames with an editor, and record the animation example corresponding to the main body identifier.
The editor may be an animation editor such as Animate or a similar tool. The editor can generate the animation example from the plurality of key static frames; that is, the plurality of key static frames are loaded into the editor, and the animation example corresponding to the subject identifier is recorded.
Each animation example can intuitively show not only the appearance of the corresponding main body but also the visual effect of its skills. Because an animation example is generated from only the key static frames, generation is efficient and memory occupation is low.
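Continuing the sketch, the recorded result of step 103 could be modeled as below. In the application the animation example is produced by an external editor; this only shows the shape of the recorded data, and the frame_rate default and the "anim_" naming convention are assumptions.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class AnimationInstance:
    """The recorded animation example: an ordered list of key static frames."""
    animation_id: str
    frames: List[object]   # StaticFrame objects from the previous sketch
    frame_rate: int = 12   # assumed default; the application does not fix a rate


def record_animation(subject_id: str, frames: List[object]) -> AnimationInstance:
    """Step 103: what loading the key static frames into the editor produces."""
    return AnimationInstance(animation_id=f"anim_{subject_id}", frames=frames)
```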
Step 104: generate a resource management script according to the animation example corresponding to the main body identifier, and call the resource management script to display the animation example according to the playing request of the user.
Specifically, a resource management script is generated according to the animation example corresponding to the body identifier, and the script is invoked according to the user's play request to display the animation example; in other words, the animation example is displayed by means of the resource management script. The user's play request may be issued by voice or by triggering a corresponding control.
It should be noted that, in different application scenarios, the resource management script may be generated from the animation example corresponding to the body identifier in different ways. Examples are as follows:
as a possible implementation manner, a plurality of animation editing strategies are added to the root node, the corresponding relation between the main body identification and the animation example identification which need to be loaded is added to the child nodes, a tree structure is constructed, wherein the root node of the tree structure corresponds to the animation editing strategy, the child node corresponds to the corresponding relation between the subject identification and the animation example identification, wherein the animation editing strategy corresponds to the playing rule of the animation, including the playing speed, the number of replays, the default volume of playing, and how the corresponding animation editing strategy is played, such as sequential playing, random playing, search playing, etc., wherein, that is, as shown in fig. 2-1, one animation editing strategy may correspond to all child nodes, alternatively, as shown in FIG. 2-2, an animation editing strategy corresponds to a portion of the child nodes.
In this implementation, when the user selects a main body, all animation examples may be presented for the user to choose from. Specifically, a main body preview play request sent by the user is received; the preview play request may be triggered by the user through voice or through a preview control. According to the preview play request, the plurality of animation editing strategies added at the root node are queried in order to invoke an overall play strategy. The overall play strategy may obtain the animation example identifiers on the child nodes in the order given by the root node and play the animation example corresponding to each identifier. The playing mode of each animation example is determined by its corresponding animation editing strategy.
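Continuing the ResourceScript sketch above, the overall play strategy could be handled roughly as follows; the strategy name "overall_play" and the play placeholder are assumptions, and the print call merely stands in for the real player.

```python
def play(animation_id: str, strategy: EditingStrategy) -> None:
    # Stand-in for handing the animation example to the actual player.
    print(f"playing {animation_id} at {strategy.play_speed}x, {strategy.replays} time(s)")


def handle_preview_request(script: ResourceScript) -> None:
    """Overall play strategy: walk the child nodes in order and play each example."""
    overall = next(s for s in script.strategies if s.name == "overall_play")
    for child in script.children:
        play(child.animation_id, overall)
```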
In the present example, an animation example can also be displayed to match the user's preference characteristics. That is, a main body subscription play request carrying user preference characteristics and sent by the user is received. The user preference characteristics may be obtained by collecting a face image of the user and analyzing the user's gender and the like from the image, or by matching the user in a query database, or they may be actively provided by the user when sending the main body subscription request. After the main body subscription play request carrying the user preference characteristics is obtained, the plurality of animation editing strategies added at the root node are queried in order to invoke a search play strategy, which is used to obtain the target subject identifier that successfully matches the user preference characteristics. It is easy to understand that each subject identifier is preset with corresponding preference attribute information and the like, so this preference attribute information can be matched against the user preference characteristics to determine the target subject identifier. Finally, the corresponding target animation example identifier is obtained from the child nodes according to the target subject identifier, and the target animation example corresponding to that identifier is played.
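Again building on the same sketch, the search play strategy could match the user preference characteristics against preference attributes preset per subject identifier. The SUBJECT_PREFERENCES table, the tag names, and the matching rule (any shared tag counts as a successful match) are illustrative assumptions.

```python
from typing import Dict, Optional, Set

# Hypothetical preset preference attribute information per subject identifier.
SUBJECT_PREFERENCES: Dict[str, Set[str]] = {
    "hero_001": {"female", "mage"},
    "hero_002": {"male", "warrior"},
}


def handle_subscription_request(script: ResourceScript,
                                user_preferences: Set[str]) -> Optional[str]:
    """Search play strategy: find the target subject whose preset preference
    attributes match the user preference characteristics, then play the
    corresponding target animation example found on the child nodes."""
    search = next(s for s in script.strategies if s.name == "search_play")
    for subject_id, tags in SUBJECT_PREFERENCES.items():
        if user_preferences & tags:  # successful match
            for child in script.children:
                if child.subject_id == subject_id:
                    play(child.animation_id, search)
                    return child.animation_id
    return None
```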
As another possible implementation, a resource management script corresponding to each animation example is generated, and playing keywords are constructed for each animation example. After a play request from the user is obtained, the keywords in the request are analyzed. For example, if the user's play request is a voice request such as "please show me a female character", the keyword "female" is matched with the corresponding animation example, which is then played.
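A toy version of this keyword-based alternative is sketched below; the PLAY_KEYWORDS table and the simple whitespace tokenisation standing in for real voice parsing are assumptions.

```python
from typing import Dict, List, Set

# Hypothetical play keywords registered for each animation example.
PLAY_KEYWORDS: Dict[str, Set[str]] = {
    "anim_hero_001": {"woman", "female", "mage"},
    "anim_hero_002": {"man", "male", "warrior"},
}


def match_by_keyword(request_text: str) -> List[str]:
    """Pick the animation examples whose keywords appear in the play request."""
    words = set(request_text.lower().split())
    return [anim for anim, keys in PLAY_KEYWORDS.items() if words & keys]


# Example: match_by_keyword("please show me a female character") -> ["anim_hero_001"]
```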
The embodiments of the application are mainly applied to multi-user interaction scenes such as games or social applications. Therefore, in order to increase the sense of interaction among users and further improve the user experience, the user's friend relationships can be queried, and when the user is selecting an animation example, the animation examples selected by the corresponding friends are displayed, so that the user can conveniently make a selection by referring to the friends' choices.
To sum up, the body presentation method according to the embodiments of the present application obtains a role attribute and a role skill attribute corresponding to a body identifier, generates a plurality of key static frames corresponding to the body identifier according to the role attribute and the role skill attribute, loads the plurality of key static frames with an editor, records an animation example corresponding to the body identifier, then generates a resource management script according to the animation example, and calls the resource management script to present the animation example according to a play request of the user. In this way, each main body is displayed visually in the form of an animation example, which makes it easier for the user to select a main body and improves the display efficiency of the main body while ensuring that the main body is displayed intuitively.
In order to realize the above embodiments, the present application also provides a body presentation apparatus. Fig. 3 is a schematic structural diagram of a body presentation apparatus according to an embodiment of the present application. As shown in Fig. 3, the body presentation apparatus comprises an acquiring module 100, a generating module 200, a recording module 300, and a display module 400, where the acquiring module 100 is used to acquire the role attribute and the role skill attribute corresponding to the subject identifier;
a generating module 200, configured to generate multiple key static frames corresponding to the subject identifier according to the role attribute and the role skill attribute;
the recording module 300 is configured to load the plurality of key static frames with an editor and record the animation example corresponding to the body identifier;
the generating module 200 is further configured to generate a resource management script according to the animation example corresponding to the subject identifier;
and the presentation module 400 is configured to invoke the resource management script to present the animation example according to the play request of the user.
In an embodiment of the present application, the generating module 200 is specifically configured to:
determining role parameters of a plurality of angles according to the role attributes corresponding to the main body identification, and generating a plurality of corresponding first static frames according to the role parameters of the plurality of angles;
determining a plurality of time point equipment rendering parameters according to the role skill attributes corresponding to the main body identification, and generating a plurality of corresponding second static frames according to the plurality of time point equipment rendering parameters;
and determining a plurality of role equipment fusion parameters corresponding to the trigger conditions according to the role attributes and the role skill attributes corresponding to the main body identification, and generating a plurality of corresponding third static frames according to the plurality of role equipment fusion parameters corresponding to the trigger conditions.
In an embodiment of the present application, the generating module 200 is specifically configured to:
adding a plurality of animation editing strategies at a root node, and adding a corresponding relation between a main body identifier and an animation example identifier which need to be loaded at a child node.
In this embodiment, the display module 400 is specifically configured to:
receiving a main body preview playing request sent by a user;
and querying a plurality of animation editing strategies added by the root node to call an overall playing strategy, acquiring animation example identifications on the child nodes in sequence, and playing the animation examples corresponding to the animation example identifications.
In this embodiment, the display module 400 is specifically configured to:
receiving a main body subscription playing request which is sent by a user and carries user preference characteristics;
inquiring a plurality of animation editing strategies added by the root node to call a searching and playing strategy, and acquiring a target main body identifier successfully matched with the user preference characteristics;
and acquiring a corresponding target animation example identifier on the child node according to the target main body identifier, and playing the target animation example corresponding to the target animation example identifier.
It should be noted that the foregoing explanation of the body displaying method is also applicable to the body displaying apparatus in the embodiment of the present application, and the implementation principle thereof is similar and will not be described herein again.
To sum up, the body presentation apparatus according to the embodiments of the present application obtains a role attribute and a role skill attribute corresponding to a body identifier, generates a plurality of key static frames corresponding to the body identifier according to the role attribute and the role skill attribute, loads the plurality of key static frames with an editor, records an animation example corresponding to the body identifier, then generates a resource management script according to the animation example, and calls the resource management script to present the animation example according to a play request of the user. In this way, each main body is displayed visually in the form of an animation example, which makes it easier for the user to select a main body and improves the display efficiency of the main body while ensuring that the main body is displayed intuitively.
In order to implement the foregoing embodiments, an electronic device is further provided in an embodiment of the present application, including a processor and a memory;
wherein, the processor runs the program corresponding to the executable program code by reading the executable program code stored in the memory, so as to implement the subject presentation method as described in the above embodiments.
FIG. 4 illustrates a block diagram of an exemplary electronic device suitable for use in implementing embodiments of the present application. The electronic device 12 shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in FIG. 4, electronic device 12 is embodied in the form of a general purpose computing device. The components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. These architectures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus, to name a few.
Electronic device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 28 may include computer system readable media in the form of volatile Memory, such as Random Access Memory (RAM) 30 and/or cache Memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 4, and commonly referred to as a "hard drive"). Although not shown in FIG. 4, a disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a Compact disk Read Only memory (CD-ROM), a Digital versatile disk Read Only memory (DVD-ROM), or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the application.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally perform the functions and/or methodologies of the embodiments described herein.
Electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with electronic device 12, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public Network such as the Internet) via the Network adapter 20. As shown, the network adapter 20 communicates with other modules of the electronic device 12 via the bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing, for example, implementing the methods mentioned in the foregoing embodiments, by executing programs stored in the system memory 28.
In order to implement the foregoing embodiments, the present application also proposes a non-transitory computer-readable storage medium on which a computer program is stored, which when executed by a processor implements the subject presentation method described in the foregoing embodiments.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (10)

1. A subject presentation method, comprising:
acquiring a role attribute and a role skill attribute corresponding to the main body identification;
generating a plurality of key static frames corresponding to the main body identification according to the role attributes and the role skill attributes;
loading the plurality of key static frames with an editor, and recording an animation example corresponding to the main body identification;
and generating a resource management script according to the animation example corresponding to the main body identification, and calling the resource management script to display the animation example according to a playing request of a user.
2. The method of claim 1, wherein generating a plurality of key static frames corresponding to a subject identification from the role attributes and role skill attributes comprises:
determining role parameters of a plurality of angles according to the role attributes corresponding to the main body identification, and generating a plurality of corresponding first static frames according to the role parameters of the plurality of angles;
determining a plurality of time point equipment rendering parameters according to the role skill attributes corresponding to the main body identification, and generating a plurality of corresponding second static frames according to the plurality of time point equipment rendering parameters;
determining a plurality of role device fusion parameters corresponding to the trigger conditions according to the role attributes and the role skill attributes corresponding to the main body identification, and generating a plurality of corresponding third static frames according to the plurality of role device fusion parameters corresponding to the trigger conditions.
3. The method of claim 1, wherein generating a resource management script from an animation instance corresponding to the subject identification comprises:
adding a plurality of animation editing strategies at a root node, and adding a corresponding relation between a main body identifier and an animation example identifier which need to be loaded at a child node.
4. The method of claim 3, wherein invoking the asset management script to expose the animation instance in accordance with a user's play request comprises:
receiving a main body preview playing request sent by a user;
and querying a plurality of animation editing strategies added by the root node to call an overall playing strategy, acquiring animation example identifications on child nodes in sequence, and playing the animation examples corresponding to the animation example identifications.
5. The method of claim 3, wherein invoking the asset management script to expose the animation instance in accordance with a user's play request comprises:
receiving a main body subscription playing request which is sent by a user and carries user preference characteristics;
inquiring a plurality of animation editing strategies added by the root node to call a searching and playing strategy, and acquiring a target subject identifier successfully matched with the user preference characteristics;
and acquiring a corresponding target animation example identifier on the child node according to the target main body identifier, and playing a target animation example corresponding to the target animation example identifier.
6. A subject presentation device, comprising:
the acquiring module is used for acquiring the role attribute and the role skill attribute corresponding to the main body identifier;
the generating module is used for generating a plurality of key static frames corresponding to the main body identification according to the role attributes and the role skill attributes;
the recording module is used for loading the plurality of key static frames with an editor and recording the animation example corresponding to the main body identification;
the generation module is further used for generating a resource management script according to the animation example corresponding to the main body identification;
and the display module is used for calling the resource management script to display the animation example according to the playing request of the user.
7. The apparatus of claim 6, wherein the generation module is specifically configured to:
determining role parameters of a plurality of angles according to the role attributes corresponding to the main body identification, and generating a plurality of corresponding first static frames according to the role parameters of the plurality of angles;
determining a plurality of time point equipment rendering parameters according to the role skill attributes corresponding to the main body identification, and generating a plurality of corresponding second static frames according to the plurality of time point equipment rendering parameters;
determining a plurality of role device fusion parameters corresponding to the trigger conditions according to the role attributes and the role skill attributes corresponding to the main body identification, and generating a plurality of corresponding third static frames according to the plurality of role device fusion parameters corresponding to the trigger conditions.
8. The apparatus of claim 6, wherein the generation module is specifically configured to:
adding a plurality of animation editing strategies at a root node, and adding a corresponding relation between a main body identifier and an animation example identifier which need to be loaded at a child node.
9. An electronic device comprising a processor and a memory;
wherein the processor executes a program corresponding to the executable program code by reading the executable program code stored in the memory, for implementing the subject presentation method according to any one of claims 1 to 5.
10. A non-transitory computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the subject presentation method according to any one of claims 1 to 5.
CN201911402444.7A 2019-12-30 2019-12-30 Method and device for showing main body Pending CN111179384A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911402444.7A CN111179384A (en) 2019-12-30 2019-12-30 Method and device for showing main body

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911402444.7A CN111179384A (en) 2019-12-30 2019-12-30 Method and device for showing main body

Publications (1)

Publication Number Publication Date
CN111179384A 2020-05-19

Family

ID=70658477

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911402444.7A Pending CN111179384A (en) 2019-12-30 2019-12-30 Method and device for showing main body

Country Status (1)

Country Link
CN (1) CN111179384A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002230573A (en) * 2001-02-05 2002-08-16 Sharp Corp Device for transferring animation, device for reproducing animation, system for reproducing animation, program for transferring animation, and program for reproducing animation
CN101620739A (en) * 2009-08-13 2010-01-06 腾讯科技(深圳)有限公司 Method and device for generating motion picture
CN101645175A (en) * 2009-08-25 2010-02-10 腾讯科技(深圳)有限公司 Network virtual-role synthetic system and method
CN101983093A (en) * 2008-06-27 2011-03-02 科乐美数码娱乐株式会社 Input device, input method, information recording medium, and program
CN105279780A (en) * 2015-10-22 2016-01-27 苏州仙峰网络科技有限公司 Role selection method and system
CN105427365A (en) * 2015-11-26 2016-03-23 盛趣信息技术(上海)有限公司 Animation implementation method, system and animation updating method
CN105898522A (en) * 2016-05-11 2016-08-24 乐视控股(北京)有限公司 Method, device and system for processing barrage information
CN107154069A (en) * 2017-05-11 2017-09-12 上海微漫网络科技有限公司 A kind of data processing method and system based on virtual role
CN109816758A (en) * 2018-12-21 2019-05-28 武汉西山艺创文化有限公司 A kind of two-dimensional character animation producing method neural network based and device



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
TA01: Transfer of patent application right (effective date of registration: 20220810; address after: Texas, USA; applicant after: People's happiness Co.,Ltd.; address before: 100085 East District, Second Floor, 33 Xiaoying West Road, Haidian District, Beijing; applicant before: BEIJING KINGSOFT INTERNET SECURITY SOFTWARE Co.,Ltd.)
WD01: Invention patent application deemed withdrawn after publication (application publication date: 20200519)