CN111437600A - Plot presentation method, apparatus, device and storage medium


Info

Publication number: CN111437600A
Authority: CN (China)
Prior art keywords: data, scenario, virtual character, display, plot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number: CN202010213234.XA
Other languages: Chinese (zh)
Inventor: 何秋豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.): Tencent Technology Shenzhen Co Ltd
Original Assignee: Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202010213234.XA
Publication of CN111437600A
Legal status: Pending

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/45 - Controlling the progress of the video game
    • A63F13/47 - Controlling the progress of the video game involving branching, e.g. choosing one of several possible scenarios at a given point in time
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 - Animation
    • G06T13/80 - 2D [Two Dimensional] animation, e.g. using sprites

Abstract

The embodiments of the present application disclose a plot presentation method, apparatus, device and storage medium, belonging to the field of computer technology. The method includes: receiving a presentation instruction for a first target plot; obtaining, in response to the instruction, the presentation data of the plot; and, while the plot description data is played, dynamically displaying the virtual character according to the animation data so that at least one of the character's actions or expressions matches the plot description data. During presentation of the target plot the virtual character is no longer a fixed, unchanging static image but is displayed dynamically, which makes it more vivid; the character performs actions or expressions that fit the plot description data, improving the presentation effect of the plot.

Description

Plot presentation method, apparatus, device and storage medium
Technical Field
Embodiments of the present application relate to the field of computer technology, and in particular to a plot presentation method, apparatus, device and storage medium.
Background
With the development of computer technology, plots are now commonly set in game applications to make them more engaging, so that users can learn the story content, functions and the like of a game application through those plots and thereby understand the application better.
In a game application, a plot is usually introduced by displaying a virtual character to the user and playing scenario description data associated with that character. However, the displayed virtual character is only a static image that stays fixed throughout playback, which results in a poor presentation effect.
Disclosure of Invention
The embodiments of the present application provide a plot presentation method, apparatus, device and storage medium that improve the presentation effect of a plot. The technical solution is as follows:
in one aspect, a scenario display method is provided, and the method includes:
receiving a display instruction of a first target plot;
in response to the display instruction, obtaining display data of the first target scenario, wherein the display data comprises at least animation data of a virtual character and scenario description data matched with the animation data;
and in the process of playing the plot description data, dynamically displaying the virtual character according to the animation data so as to enable at least one of the action or the expression of the virtual character to be matched with the plot description data.
Optionally, the dynamically displaying the virtual character according to the second animation data in the process of playing the second scenario description data after playing the first scenario description data so that at least one of an action or an expression of the virtual character matches the second scenario description data includes:
and after the first scenario description data is played, responding to a continuous display instruction, and dynamically displaying the virtual character according to the second animation data in the process of playing the second scenario description data so as to enable at least one of the action or the expression of the virtual character to be matched with the second scenario description data.
Optionally, the presentation data further includes background data, and after the presentation data of the first target scenario is acquired in response to the presentation instruction, the method further includes:
and displaying the background corresponding to the background data according to the background data in the process of playing the plot description data and dynamically displaying the virtual character.
Optionally, the method further comprises:
and in the process of playing the plot description data and dynamically displaying the virtual role, responding to the triggering operation of a skipping button, stopping playing the plot description data and dynamically displaying the virtual role.
Optionally, the method further comprises:
and after playing the scenario description data and dynamically displaying the virtual character, responding to the triggering operation of a review button, and dynamically displaying the virtual character again according to the animation data in the process of re-playing the scenario description data so as to enable at least one of the action or expression of the virtual character to be matched with the scenario description data.
In another aspect, there is provided a plot showing apparatus, the apparatus comprising:
the command receiving module is used for receiving a display command of the first target plot;
the data acquisition module is used for responding to the display instruction and acquiring display data of the first target plot, wherein the display data at least comprises animation data of virtual characters and plot description data matched with the animation data;
and the plot display module is used for dynamically displaying the virtual character according to the animation data in the process of playing the plot description data so as to enable at least one of the action or the expression of the virtual character to be matched with the plot description data.
Optionally, the apparatus further comprises:
the image dividing module is used for dividing the virtual character image of the virtual character into a plurality of part images, and each part image comprises a part of the virtual character;
the animation data determining module is used for determining the motion data of each part image according to the plot description data;
and the animation data acquisition module is used for fusing the motion data of each part image to obtain the animation data.
Optionally, the data obtaining module includes:
the request sending unit is used for responding to the display instruction and sending a display data acquisition request to a server, wherein the display data acquisition request carries the plot identifier of the first target plot, and the server is used for acquiring display data corresponding to the plot identifier;
and the data receiving unit is used for receiving the display data sent by the server.
Optionally, the scenario description data includes text data, and the scenario presentation module is further configured to:
and dynamically displaying the virtual character according to the animation data, and displaying the text data on the upper layer of the virtual character so as to enable at least one of the action or the expression of the virtual character to be matched with the text data.
Optionally, the scenario description data includes voice data, and the scenario presentation module is further configured to:
and in the process of playing the voice data, dynamically displaying the virtual character according to the animation data so as to enable at least one of the action or the expression of the virtual character to be matched with the voice data.
Optionally, the presentation data at least includes first animation data, second animation data, first scenario description data matched with the first animation data, and second scenario description data matched with the second animation data, and the second animation data is next-segment animation data of the first animation data;
the plot display module comprises:
the first display unit is used for dynamically displaying the virtual character according to the first animation data in the process of playing first scenario description data so as to enable at least one of the action or the expression of the virtual character to be matched with the first scenario description data;
and the second display unit is used for dynamically displaying the virtual character according to the second animation data in the process of playing the second scenario description data after the first scenario description data is played so as to enable at least one of the action or the expression of the virtual character to be matched with the second scenario description data.
Optionally, the second display unit is further configured to:
and after the first scenario description data is played, responding to a continuous display instruction, and dynamically displaying the virtual character according to the second animation data in the process of playing the second scenario description data so as to enable at least one of the action or the expression of the virtual character to be matched with the second scenario description data.
Optionally, the apparatus further comprises:
the entrance display module is used for displaying at least two branch scenario entrances, and each branch scenario entrance is associated with one branch scenario occurring after the first target scenario;
and the instruction generating module is used for responding to the selection operation of any branch scenario entrance, determining a second target scenario related to the selected branch scenario entrance, and generating a display instruction of the second target scenario.
Optionally, the presentation data further includes background data, and the apparatus further includes:
and the background display module is used for displaying the background corresponding to the background data according to the background data in the process of playing the plot description data and dynamically displaying the virtual roles.
Optionally, the apparatus further comprises:
and the plot skipping module is used for responding to the triggering operation of the skipping button in the process of playing the plot description data and dynamically displaying the virtual roles, stopping playing the plot description data and stopping dynamically displaying the virtual roles.
Optionally, the apparatus further comprises:
and the plot reviewing module is used for responding to the triggering operation of a reviewing button after the plot description data is played and the virtual role is dynamically displayed, and dynamically displaying the virtual role again according to the animation data in the process of replaying the plot description data so as to enable at least one of the action or the expression of the virtual role to be matched with the plot description data.
In another aspect, a computer device is provided, which includes a processor and a memory, wherein at least one instruction is stored in the memory, and the at least one instruction is loaded and executed by the processor to implement the operations as performed in the plot showing method.
In another aspect, a computer-readable storage medium is provided, in which at least one instruction is stored, the at least one instruction being loaded and executed by a processor to implement operations as performed in the scenario presentation method.
The method, apparatus, device and storage medium provided by the embodiments of the present application receive a presentation instruction for a first target scenario, obtain the presentation data of the scenario in response to the instruction, and, while the scenario description data is played, dynamically display the virtual character according to the animation data so that at least one of the character's actions or expressions matches the scenario description data. During presentation of the target scenario the virtual character is no longer a fixed static image but is displayed dynamically; a dynamic virtual character is more vivid than a static one, and because at least one of its actions or expressions matches the scenario description data, the character behaves in accordance with the story being told, which improves the presentation effect of the scenario.
In addition, in the embodiments of the present application the virtual character is a dynamic two-dimensional image, yet it achieves a display effect similar to that of a dynamic three-dimensional image; since a two-dimensional image renders faster than a three-dimensional one, the rendering efficiency of the virtual character is improved.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a scenario display method according to an embodiment of the present application.
Fig. 2 is a flowchart of an animation data generation method according to an embodiment of the present application.
Fig. 3 is a schematic diagram of a display interface provided in an embodiment of the present application.
Fig. 4 is a schematic diagram of another display interface provided in the embodiment of the present application.
Fig. 5 is a flowchart of a scenario display provided in an embodiment of the present application.
Fig. 6 is a schematic diagram of another display interface provided in an embodiment of the present application.
FIG. 7 is a schematic diagram of human-computer interaction provided by an embodiment of the present application.
Fig. 8 is a flowchart of another scenario presentation method provided in an embodiment of the present application.
Fig. 9 is a flowchart of another scenario presentation method provided in an embodiment of the present application.
Fig. 10 is a schematic structural diagram of a scenario display apparatus according to an embodiment of the present application.
Fig. 11 is a schematic structural diagram of another scenario display device provided in an embodiment of the present application.
Fig. 12 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Fig. 13 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application more clear, the embodiments of the present application will be further described in detail with reference to the accompanying drawings.
It will be understood that the terms "first", "second", and the like may be used herein to describe various concepts, but unless otherwise specified the concepts are not limited by these terms. The terms are only used to distinguish one concept from another. For example, a first target scenario may be referred to as a second target scenario, and likewise a second target scenario may be referred to as a first target scenario, without departing from the scope of the present application.
As used herein, "plurality" means two or more, and "each" refers to every one of the corresponding plurality. For example, if a plurality of part images comprises 20 part images, "each part image" refers to every one of those 20 part images.
The plot showing method provided by the embodiment of the application can be applied to various scenes:
for example, the method is applied to a scenario display scene in a game application.
A plot is set in the game application so that the user can learn the story content, functions and the like of the application through it, which increases the appeal of the application. Using the plot presentation method, the terminal responds to a display instruction for the plot, obtains the display data of the plot, and presents the plot to the user.
Fig. 1 is a flowchart of a scenario display method according to an embodiment of the present application. The method is executed by a terminal, which can be any of various portable, pocket-sized or handheld devices, such as a mobile phone, a computer or a tablet computer. Referring to fig. 1, the method includes:
101. The terminal receives a display instruction of the first target scenario.
In this embodiment of the application, a target application with a plot presentation function is installed on the terminal. The target application may be a game application or another type of application; the description below takes presenting a first target plot in the target application as an example.
After receiving a display instruction for the first target plot, the terminal obtains the display data of the first target plot so as to present the plot to the user. The target plot comprises a story line and the virtual characters involved in it. The first target plot is any plot in the target application, and the display instruction instructs the terminal to display the first target plot.
In a possible implementation manner, the display instruction carries a plot identifier of the first target plot, and when the terminal receives the display instruction, the first target plot to be displayed is determined according to the plot identifier. The scenario identifier may be a name, a number, or other identifier of the first target scenario.
In one possible implementation, a target plot entrance is provided in a display interface of the terminal. When the terminal detects that the user has triggered the target plot entrance, it determines the first target plot associated with that entrance and generates a display instruction for it. The display interface is any interface of the target application that contains a plot entrance, and the trigger operation may be a click, a slide or another operation.
102. In response to the display instruction, the terminal obtains the display data of the first target scenario.
In the embodiment of the application, the display data acquired by the terminal at least comprise animation data of the virtual character and scenario description data matched with the animation data. The animation data of the virtual character is used for dynamically displaying the virtual character in the first target plot, and the plot description data is used for describing the plot of the story in the first target plot.
In one possible implementation manner, the scenario description data includes text data, that is, the terminal displays the story line to the user through the text data in the process of displaying the scenario; or, the scenario description data comprises voice data, namely, the terminal plays the story line to the user through the voice data in the process of showing the scenario; or the scenario description data comprises text data and voice data, the text data is matched with the voice data, namely, the terminal describes the story line to the user through the text data and the voice matched with the text data in the process of showing the scenario.
In one possible implementation, the presentation data further comprises background data for presenting a background of the first target scenario. The background may be a static image or a dynamic image, and the background in one target scenario may include one or more images.
In one possible implementation, the steps shown in FIG. 2 may be used to obtain animation data for a virtual character:
1021. The terminal divides the virtual character image of the virtual character into a plurality of part images, each of which contains one part of the virtual character.
The virtual character image can be divided into parts as needed. For example, if the virtual character only needs simple movement, parts such as the arms, legs and head can be separated out to obtain a part image for each part; if the virtual character needs rich expressions, parts such as the eyebrows, eyes, mouth and nose can be separated out as well.
The plurality of part images may be obtained by dividing one virtual character image or several virtual character images.
1022. The terminal determines the motion data of each part image according to the scenario description data.
An action or expression made by the virtual character requires several parts to cooperate, and because the story line of the first target scenario differs at each time point, the virtual character must make a corresponding action or expression at each time point; that is, each part must perform a corresponding action. The motion data therefore includes parameters such as a rotation parameter, a moving distance and a motion time point. The rotation parameter represents the angle through which the part rotates, the moving distance represents how far the part moves, the two together determine the action the part performs, and the motion time point is the moment at which the part performs that action.
1023. The terminal fuses the motion data of the part images to obtain the animation data.
Fusing the motion data of the part images means determining the position of each part image within the virtual character image and combining the part images into one complete virtual character image, which is a two-dimensional image.
The terminal fuses the part images into a virtual character image according to each part image's motion time points, and at each motion time point the part images perform the corresponding actions according to their rotation parameters and moving distances. The actions at consecutive time points are joined so that the motion is continuous, which produces the effect of continuously playing a sequence of virtual character images; this sequence constitutes the animation data of the virtual character. Each part image can also move slightly and deform, which makes the dynamic display of the virtual character smoother.
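To make steps 1021 to 1023 concrete, the following TypeScript sketch shows one possible shape for the motion data of a part image and how per-part keyframes could be sampled and joined at playback time. All type and field names here are assumptions for illustration; the patent does not prescribe a data format.
```typescript
// One keyframe of motion data for a single part image (names are hypothetical).
interface MotionKeyframe {
  timeMs: number;    // motion time point: when the part performs the action
  rotation: number;  // rotation parameter: target angle of the part, in radians
  moveX: number;     // moving distance of the part along x
  moveY: number;     // moving distance of the part along y
}

// Motion data for one part image (e.g. arm, leg, head, eyebrow, mouth).
interface PartMotion {
  partName: string;
  anchorX: number;   // position of the part inside the full character image
  anchorY: number;
  keyframes: MotionKeyframe[];  // sorted by timeMs
}

// Sample a part's pose at time t by interpolating between neighbouring
// keyframes, so actions at consecutive time points join into continuous motion.
function samplePose(part: PartMotion, t: number): { rotation: number; x: number; y: number } {
  const ks = part.keyframes;
  if (t <= ks[0].timeMs) return { rotation: ks[0].rotation, x: ks[0].moveX, y: ks[0].moveY };
  for (let i = 0; i < ks.length - 1; i++) {
    const a = ks[i], b = ks[i + 1];
    if (t >= a.timeMs && t <= b.timeMs) {
      const u = (t - a.timeMs) / (b.timeMs - a.timeMs);  // linear blend factor
      return {
        rotation: a.rotation + u * (b.rotation - a.rotation),
        x: a.moveX + u * (b.moveX - a.moveX),
        y: a.moveY + u * (b.moveY - a.moveY),
      };
    }
  }
  const last = ks[ks.length - 1];
  return { rotation: last.rotation, x: last.moveX, y: last.moveY };
}
```
Under these assumptions, "fusing" the motion data amounts to sampling every part against the same clock and drawing each part image at its anchor position with the sampled pose.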
Optionally, after the terminal obtains the animation data, the virtual character corresponding to the animation data can be displayed so that a technician can preview it; if the display effect is poor, the motion data can be reset.
Alternatively, the terminal may use Live2D (a drawing and rendering technique) to obtain the animation data. Live2D animates a series of successive two-dimensional images of a character to produce a display effect similar to that of a three-dimensional model.
In addition, the virtual character image in this application is a two-dimensional image and the obtained animation data corresponds to that two-dimensional image, so dynamically displaying the virtual character based on the animation data only requires rendering a two-dimensional image. Compared with rendering a three-dimensional image this improves rendering efficiency while achieving a similar display effect.
In one possible implementation, the terminal responds to the display instruction by sending a display data acquisition request carrying the scenario identifier of the first target scenario to the server; the server obtains the display data corresponding to the scenario identifier and returns it, and the terminal receives it. The server stores the scenario identifiers of the scenarios in the target application together with their corresponding display data.
Optionally, the display data is in JSON (JavaScript Object Notation) format. JSON is a lightweight data exchange format that is easy to write, quick to parse and convenient to transmit, which effectively improves the efficiency of data transmission.
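As an illustration of this exchange, the sketch below requests JSON display data for a scenario identifier and types the response. The endpoint path and all field names are assumptions made for this example; the patent does not specify the JSON schema.
```typescript
// Hypothetical shape of the JSON display data (field names are illustrative).
interface DisplayData {
  scenarioId: string;
  background?: string;    // URL of a static or dynamic background image
  segments: Array<{
    text?: string;        // text data displayed on the upper layer of the character
    voiceUrl?: string;    // voice data played while the character is animated
    animationUrl: string; // animation data matched with this segment
  }>;
}

// Terminal side: request the display data corresponding to a scenario identifier.
async function fetchDisplayData(scenarioId: string): Promise<DisplayData> {
  const resp = await fetch(`/api/scenario?id=${encodeURIComponent(scenarioId)}`);
  if (!resp.ok) throw new Error(`display data request failed: ${resp.status}`);
  return (await resp.json()) as DisplayData;  // JSON is lightweight and parses quickly
}
```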
In another possible implementation, the installation package of the target application contains the display data of each target scenario. The terminal responds to the display instruction by obtaining the first display data of the first target scenario from the installation package and sending the server a display data acquisition request carrying that first display data; the server converts the first display data into second display data in a different format and sends it to the terminal, which receives it. The data format used in the installation package is convenient for storage but cannot be parsed directly by the terminal; with this format conversion, the server converts the display data into a format the terminal can parse quickly.
103. While playing the scenario description data, the terminal dynamically displays the virtual character according to the animation data so that at least one of the virtual character's actions or expressions matches the scenario description data.
In this embodiment of the application, after the terminal obtains the display data of the first target scenario, it can play the scenario description data and dynamically display the virtual character according to the animation data so that at least one of the character's actions or expressions matches the scenario description data. Actions of the virtual character include raising a hand, walking, turning around and the like; expressions include closing the eyes, smiling and the like.
Matching at least one of the virtual character's actions or expressions with the scenario description data means that while the scenario description data describes a segment of the story, the virtual character makes at least one action or expression that fits that story line. For example, if the scenario description data is "bye", the virtual character's action may be a hand-waving action that matches "bye".
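One simple way to realize such matching, sketched below in TypeScript, is a lookup from phrases in the scenario description data to named animation clips. The table contents and clip names are purely illustrative assumptions, not part of the patent.
```typescript
// Illustrative mapping from phrases in the description data to animation clips.
const actionForPhrase: Record<string, string> = {
  bye: "wave_hand",   // "bye" is matched with a hand-waving action
  smile: "smile",     // expression clips can be mapped the same way
  walk: "walk_cycle",
};

// Pick the clip whose phrase occurs in a line of description data.
function pickClip(line: string): string {
  const lower = line.toLowerCase();
  for (const phrase of Object.keys(actionForPhrase)) {
    if (lower.includes(phrase)) return actionForPhrase[phrase];
  }
  return "idle";  // default clip when no phrase matches
}
```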
In one possible implementation, when the scenario description data includes text data, the terminal dynamically displays the virtual character according to the animation data, and displays the text data on an upper layer of the virtual character so that at least one of an action or an expression of the virtual character matches the text data. The text data can be displayed at any position on the upper layer of the virtual character.
Alternatively, the text data may include a plurality of continuous segments, and each segment of the text data is displayed in sequence when the text data is displayed on the upper layer of the virtual character, and at this time, the plurality of segments of text are respectively matched with at least one of the actions or expressions of the virtual character.
Optionally, when the text data is displayed on the upper layer of the virtual character, a piece of text data may be shown all at once, or its characters may be shown one after another in a preset order to achieve a ticker display effect. In either display mode, the time from showing the first character to showing the last character of a piece of text data needs to be consistent with the display duration of the matched virtual character animation.
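A ticker effect of this kind can be sketched as follows; the per-character delay is derived from the matched animation's duration so that the two stay consistent. The helper below is an assumption for illustration only.
```typescript
// Display a piece of text data character by character ("ticker" effect),
// pacing the characters so the whole piece spans the matched animation's duration.
function ticker(el: HTMLElement, text: string, animationMs: number): Promise<void> {
  const perCharMs = animationMs / Math.max(text.length, 1);
  el.textContent = "";
  return new Promise<void>((resolve) => {
    let i = 0;
    const timer = setInterval(() => {
      el.textContent += text[i++];
      if (i >= text.length) {
        clearInterval(timer);
        resolve();
      }
    }, perCharMs);
  });
}
```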
Optionally, the presentation data includes a display style for the text data, and the terminal displays the text data on the upper layer of the virtual character according to that style. For example, the display interface 301 shown in fig. 3 provides a text box 302; the name of the virtual character, "character 1", is displayed in the upper-left corner of the text box, and the text data "little main, or casualty" is displayed inside it.
In another possible implementation manner, the scenario description data includes voice data, and the terminal dynamically displays the virtual character according to the animation data during playing the voice data, so that at least one of the motion or expression of the virtual character is matched with the voice data. Therefore, the effect of speaking the voice data by the virtual character is simulated, the virtual character is more vivid, and the sense of reality of the virtual character is increased.
In another possible implementation manner, the scenario description data includes text data and voice data, the terminal dynamically displays the virtual character according to the animation data during playing the voice data, and displays the text data on the upper layer of the virtual character, so that at least one of the motion or expression of the virtual character is matched with the text data and the voice data. The text data is matched with the voice data, namely, the text data matched with the voice data is displayed in the process of playing the voice data.
In another possible implementation manner, the first target scenario is divided into a plurality of sections of target scenarios, the animation data in the display data is divided into a plurality of continuous sections, the scenario description data is divided into a plurality of continuous sections, and each section of target scenario has a corresponding section of animation data and a corresponding section of scenario description data. Next, description will be given taking an example in which the first target scenario is divided into two sections, the animation data is divided into first animation data and second animation data, and the scenario description data is divided into first scenario description data and second scenario description data.
The display data comprises first animation data, second animation data, first plot description data matched with the first animation data and second plot description data matched with the second animation data, and the second animation data is next section of animation data of the first animation data. In the process that the terminal plays the first scenario description data, the virtual character is dynamically displayed according to the first animation data so that at least one of the action or the expression of the virtual character is matched with the first scenario description data; and after the first scenario description data is played, dynamically displaying the virtual character according to the second animation data in the process of playing the second scenario description data so as to enable at least one of the action or the expression of the virtual character to be matched with the second scenario description data.
After the terminal presents the first section of the target scenario, the second section can be presented without any user operation; that is, the terminal plays the first target scenario automatically.
In one possible implementation manner, after the terminal plays the first scenario description data, in response to the continued presentation instruction, in the process of playing the second scenario description data, the terminal dynamically displays the virtual character according to the second animation data, so that at least one of the motion or expression of the virtual character is matched with the second scenario description data. That is to say, after the terminal plays the first scenario description data, the terminal continues to display the scenario when receiving the continue display instruction.
Optionally, the terminal generates a continued-presentation instruction in response to the user's trigger operation at any position in the display interface. Alternatively, when playing the first scenario description data, the terminal displays a button corresponding to that data and generates the continued-presentation instruction only in response to the user triggering that button; triggering other positions in the display interface does not generate the instruction.
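The two playback modes above (automatic continuation versus waiting for a continued-presentation instruction) could be driven by a loop like the following sketch; the Segment type and playSegment routine are assumptions made for illustration.
```typescript
interface Segment { text?: string; voiceUrl?: string; animationUrl: string }

// Play the scenario section by section. In automatic mode the next section
// starts as soon as the previous one ends; otherwise playback waits for the
// user's tap, i.e. the continued-presentation instruction.
async function playScenario(segments: Segment[], auto: boolean): Promise<void> {
  for (const seg of segments) {
    await playSegment(seg);  // plays one piece of description data with its animation
    if (!auto) await nextTap();
  }
}

// Resolve on the user's next tap anywhere in the display interface.
function nextTap(): Promise<void> {
  return new Promise<void>((resolve) =>
    document.addEventListener("pointerdown", () => resolve(), { once: true })
  );
}

// Stand-in for the routine that presents a single section (assumed to exist).
declare function playSegment(seg: Segment): Promise<void>;
```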
It should be noted that, in the embodiment of the present application, only the first animation data, the second animation data, the first scenario description data, and the second scenario description data are taken as examples for illustration, and in another embodiment, data such as the third animation data and the third scenario description data may also be included.
In one possible implementation manner, the presentation data further includes background data, and after the presentation data of the first target scenario is acquired in response to the presentation instruction, the method further includes: and displaying the background corresponding to the background data according to the background data in the process of playing the plot description data and dynamically displaying the virtual roles.
Alternatively, the virtual character may be displayed on an upper layer of the background, and if the scenario description data is text data, the text data may be displayed on the upper layer of the background.
In a possible implementation manner, when the display data includes text data, animation data, and background data, the terminal may display the corresponding background according to the background data, display the text data on the upper layer of the background according to the text data and the animation data, and dynamically display the virtual character. The text data and the virtual character can be displayed simultaneously, the text data can be displayed first and then the virtual character can be displayed, or the virtual character can be displayed first and then the text data can be displayed. However, if the text data and the virtual character are displayed in succession, the display time interval between the text data and the virtual character should be small.
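The layering described here (background at the bottom, virtual character above it, text data on the upper layer) can be expressed directly with stacked elements; the function and its arguments below are hypothetical.
```typescript
// Stack the three layers in the order described above. The elements are
// assumed to share one positioned container.
function stackLayers(background: HTMLElement, character: HTMLElement, textBox: HTMLElement): void {
  for (const el of [background, character, textBox]) el.style.position = "absolute";
  background.style.zIndex = "0";  // background at the bottom
  character.style.zIndex = "1";   // character on the upper layer of the background
  textBox.style.zIndex = "2";     // text data on the upper layer of the character
}
```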
In one possible implementation, after the terminal has played the scenario description data and dynamically displayed the virtual character, it displays at least two branch scenario entrances, each associated with a branch scenario occurring after the first target scenario. In response to a selection operation on any branch scenario entrance, the terminal determines the second target scenario associated with the selected entrance and generates a display instruction for it. After the display instruction for the second target scenario is generated, the terminal proceeds as in steps 101 to 103, which is not repeated here.
For example, referring to a display interface 401 shown in fig. 4, the display interface 401 displays a last text data 402 in a first target scenario, a first branch scenario entrance 403 and a second branch scenario entrance 404, and a user may select one of the branch scenario entrances to display a second target scenario associated with the branch scenario entrance. For example, the last piece of text data 402 is "please select a forward direction", the first branching scenario entry 403 is an entry of "go left", and the second branching scenario entry 404 is an entry of "go right".
Optionally, referring to the flowchart shown in fig. 5, the terminal presents a main scenario 501; after main scenario 501 has been presented, a first branch scenario entrance corresponding to first branch scenario 502 and a second branch scenario entrance corresponding to second branch scenario 503 are displayed. The user may select either entrance, the terminal presents the corresponding branch scenario, and after the branch scenario has been presented, main scenario 504 is presented. Main scenario 501 and main scenario 504 belong to two sections of the same main story line.
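The flow of fig. 5 can be sketched as follows: present the main scenario, show the branch entrances, present the branch the user selects, then return to the main line. All names below are assumptions for illustration.
```typescript
interface BranchEntry { label: string; scenarioId: string }

// Main scenario 501 -> user picks a branch entrance -> branch 502 or 503 -> main scenario 504.
async function presentWithBranches(
  mainId: string, branches: BranchEntry[], nextMainId: string
): Promise<void> {
  await presentScenario(mainId);                // main scenario 501
  const chosen = await chooseBranch(branches);  // e.g. "go left" / "go right" entrances
  await presentScenario(chosen.scenarioId);     // the selected branch scenario
  await presentScenario(nextMainId);            // main scenario 504
}

// Stand-ins for the presentation and selection routines (assumed to exist).
declare function presentScenario(id: string): Promise<void>;
declare function chooseBranch(entries: BranchEntry[]): Promise<BranchEntry>;
```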
In one possible implementation, while the scenario description data is playing and the virtual character is being dynamically displayed, the terminal stops both in response to a trigger operation on the skip button. With a skip button, a user who is not interested in the target scenario can have the terminal skip it, which improves the flexibility of scenario presentation.
In one possible implementation, after the scenario description data has been played and the virtual character dynamically displayed, the terminal responds to a trigger operation on the review button by replaying the scenario description data and again dynamically displaying the virtual character according to the animation data, so that at least one of the character's actions or expressions matches the scenario description data. With a review button, a user who is interested in the target scenario, or who skipped it earlier, can have the terminal present it again, which further improves the flexibility of scenario presentation.
Optionally, referring to the display interface 601 shown in fig. 6, the interface includes an "auto" button, a "skip" button and a "review" button; the terminal responds to the user's trigger operation on any of them by executing the function of the triggered button.
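Wiring the three buttons of fig. 6 might look like the sketch below; the element ids and the stop/replay helpers are assumptions made for illustration.
```typescript
let autoMode = false;

// "Auto": toggle automatic continuation between sections.
document.getElementById("auto-button")!.addEventListener("click", () => {
  autoMode = !autoMode;
});

// "Skip": stop playing the description data and stop the character animation.
document.getElementById("skip-button")!.addEventListener("click", () => {
  stopDescriptionPlayback();
  stopCharacterAnimation();
});

// "Review": play the description data again and animate the character again.
document.getElementById("review-button")!.addEventListener("click", () => {
  replayScenario();
});

declare function stopDescriptionPlayback(): void;
declare function stopCharacterAnimation(): void;
declare function replayScenario(): void;
```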
In this embodiment of the application, scenario presentation is realized through human-computer interaction between the user and the terminal together with data transmission between the terminal and the server. Referring to the schematic diagram shown in fig. 7, a user 701 triggers a target scenario in the display interface of a terminal 702, the terminal 702 obtains the display data of the target scenario from a server 703 and presents the target scenario according to that data, and the user 701 views the presented scenario.
In addition, in this embodiment of the application the display interface used to present the scenario is a web (World Wide Web) interface, so the scenario can be presented by any terminal that supports web interfaces, including mobile phones, computers and tablet computers, realizing cross-terminal presentation of the scenario.
The pixi.js framework (a rendering engine) and Canvas technology can be used to render the virtual character. pixi.js is a rendering engine for two-dimensional images that renders them quickly, improving rendering efficiency; Canvas is a rendering technology built into web interfaces.
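A minimal rendering sketch with pixi.js (assuming a v6-style API) is shown below: the part images are added to the stage as sprites on a web Canvas and given a small idle motion in the ticker. The asset paths and motion values are invented for illustration.
```typescript
import * as PIXI from "pixi.js";

// Create a 2D renderer backed by a Canvas element and attach it to the page.
const app = new PIXI.Application({ width: 540, height: 960, backgroundColor: 0xffffff });
document.body.appendChild(app.view);  // app.view is the Canvas

// Load the part images of the virtual character as individual sprites.
const body = PIXI.Sprite.from("character/body.png");
const head = PIXI.Sprite.from("character/head.png");
const arm = PIXI.Sprite.from("character/arm.png");
arm.anchor.set(0.5, 0.1);             // rotate the arm around the shoulder

app.stage.addChild(body, head, arm);  // the fused character image

// Give the parts a small continuous motion so the character is displayed dynamically.
const headBaseY = head.y;
let t = 0;
app.ticker.add((delta) => {
  t += delta / 60;                        // delta is measured in 60 fps frames
  arm.rotation = 0.2 * Math.sin(2 * t);   // slight rotation of the arm
  head.y = headBaseY + 2 * Math.sin(t);   // slight movement of the head
});
```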
It should be noted that, in the embodiment of the present application, only one virtual character in the target scenario is taken as an example for description, in another embodiment, the target scenario may include a plurality of virtual characters, the presentation data includes animation data of the plurality of virtual characters and scenario description data corresponding to the plurality of virtual characters, and the embodiment of the present application does not limit the number of virtual characters included in the target scenario.
The plot presentation method provided by this embodiment of the application receives a presentation instruction for a first target scenario, obtains the presentation data of the scenario in response to the instruction, and, while the scenario description data is played, dynamically displays the virtual character according to the animation data so that at least one of the character's actions or expressions matches the scenario description data. During presentation of the target scenario the virtual character is no longer a fixed static image but is displayed dynamically; a dynamic virtual character is more vivid than a static one, and because its actions or expressions match the scenario description data, the presentation effect of the scenario is improved.
In addition, in this embodiment the virtual character is a dynamic two-dimensional image that can achieve a display effect similar to that of a dynamic three-dimensional image; the two-dimensional image renders faster, improving the rendering efficiency of the virtual character. Combined with a certain amount of rotation and stretching, the dynamic display method of this application can achieve 90% of the display effect of a three-dimensional image while improving actual rendering performance by 60% compared with rendering a three-dimensional image.
In addition, the plot presentation method provided by this embodiment can be applied to various operating systems, so separate models do not need to be built for each operating system. For example, the same animation data can be used on both the Android system and the iOS system.
Fig. 8 is a flowchart of another scenario showing method provided in an embodiment of the present application, and referring to fig. 8, the method includes:
801. The user opens the game application installed on the terminal.
802. The terminal responds to the triggering operation of a user on a scenario entrance in the game application and obtains first display data of a target scenario.
803. The terminal sends a display data acquisition request to the server, wherein the display data acquisition request carries first display data.
804. The server caches the first display data according to the received display data acquisition request, acquires second display data with a format different from that of the first display data, and sends the second display data to the terminal.
805. The terminal receives and loads the second display data.
806. The terminal renders the loaded display data and displays the scenario. The implementation of the scenario display in step 806 is similar to the embodiment shown in fig. 1 and is not described again here.
807. The terminal detects the user's trigger operation on the display interface and performs subsequent processing according to that operation.
The above steps 801, 802 and 807 realize the interaction between the user and the terminal, and the above steps 803 to 805 realize the data transmission between the terminal and the server.
Fig. 9 is a flowchart of another scenario showing method provided in an embodiment of the present application, and referring to fig. 9, the method includes:
901. In response to a display instruction for the target scenario, obtain the display data of the target scenario. The target scenario comprises a main scenario and branch scenarios and is divided into multiple sections, each section having its own display data; the display data comprises background data, scenario description data and animation data.
902. Determine whether the section to be presented belongs to the main scenario; if so, execute step 903, and if not, execute step 909.
903. Determine whether the display data comprises background data; if so, generate a background instruction and execute step 904, the background instruction instructing the terminal to display the background; if not, execute step 905.
904. Load and display the background.
905. Determine whether the display data comprises scenario description data; if so, generate a scenario description instruction and execute step 906, the scenario description instruction instructing the terminal to play the scenario description data; if not, execute step 907.
906. Play the scenario description data.
907. Determine whether the display data comprises animation data of the virtual character; if so, generate an animation instruction and execute step 908, the animation instruction instructing the terminal to dynamically display the virtual character; if not, execute step 912.
908. Dynamically display the virtual character.
909. Determine whether the current step is an option step; if so, generate an option instruction and execute step 911, the option instruction instructing the terminal to display at least two branch scenario entrances; if not, execute step 910.
910. Display the branch scenario, then execute step 903. A branch scenario is displayed in a similar manner to the main scenario.
911. Display at least two branch scenario entrances.
912. Detect the user's trigger operation on the display interface.
913. Determine whether the current scenario is the last one in the target scenario; if so, execute step 914, and if not, execute step 902 again.
914. Stop displaying the target scenario.
The specific implementation of the scenario display in the embodiment shown in fig. 9 is similar to the implementation shown in fig. 1, and is not repeated herein.
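Condensed to code, the fig. 9 flow might look like the following sketch. The Section type and the helper routines are assumptions; each kind of display data is handled only when present, and description playback and character animation run in parallel so the character moves while the description plays.
```typescript
interface Option { label: string; scenarioId: string }
interface Section {
  background?: string;   // background data (step 903)
  description?: string;  // scenario description data (step 905)
  animation?: string;    // animation data of the virtual character (step 907)
  options?: Option[];    // present on an option step (step 909)
}

async function presentTargetScenario(sections: Section[]): Promise<void> {
  for (const sec of sections) {  // step 913: continue until the last section
    if (sec.options) {           // step 909: an option step?
      const chosen = await showBranchEntrances(sec.options);  // step 911
      await presentBranch(chosen.scenarioId);  // step 910: shown like the main scenario
      continue;
    }
    if (sec.background) await showBackground(sec.background);  // steps 903-904
    await Promise.all([                                        // steps 905-908 in parallel
      sec.description ? playDescription(sec.description) : Promise.resolve(),
      sec.animation ? animateCharacter(sec.animation) : Promise.resolve(),
    ]);
    await waitForTap();  // step 912: user's trigger operation
  }
  // step 914: the last section has been shown; stop displaying the target scenario
}

// Stand-ins for the terminal's presentation routines (assumed to exist).
declare function showBackground(url: string): Promise<void>;
declare function playDescription(data: string): Promise<void>;
declare function animateCharacter(data: string): Promise<void>;
declare function showBranchEntrances(options: Option[]): Promise<Option>;
declare function presentBranch(scenarioId: string): Promise<void>;
declare function waitForTap(): Promise<void>;
```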
Fig. 10 is a schematic structural diagram of a scenario display apparatus according to an embodiment of the present application. Referring to fig. 10, the apparatus includes:
an instruction receiving module 1001, configured to receive a display instruction of a first target scenario;
the data acquisition module 1002 is configured to acquire display data of a first target scenario in response to a display instruction, where the display data at least includes animation data of a virtual character and scenario description data matched with the animation data;
the scenario presentation module 1003 is configured to dynamically display the virtual character according to the animation data in the process of playing the scenario description data, so that at least one of an action or an expression of the virtual character is matched with the scenario description data.
The apparatus provided by this embodiment of the application receives a presentation instruction for a first target scenario, obtains the presentation data of the scenario in response to the instruction, and, while the scenario description data is played, dynamically displays the virtual character according to the animation data so that at least one of the character's actions or expressions matches the scenario description data. During presentation of the target scenario the virtual character is no longer a fixed static image but is displayed dynamically; a dynamic virtual character is more vivid than a static one, and because its actions or expressions match the scenario description data, the presentation effect of the scenario is improved.
Optionally, referring to fig. 11, the apparatus further comprises:
an image dividing module 1004 for dividing the virtual character image of the virtual character into a plurality of part images, each part image including a part of the virtual character;
an animation data determining module 1005 for determining motion data of each of the part images according to the scenario description data;
and the animation data acquisition module 1006 is configured to fuse the motion data of each of the part images to obtain the animation data.
Optionally, referring to fig. 11, the data obtaining module 1002 includes:
a request sending unit 1012, configured to send, in response to the display instruction, a display data acquisition request to the server, where the display data acquisition request carries a scenario identifier of the first target scenario, and the server is configured to obtain display data corresponding to the scenario identifier;
the data receiving unit 1022 is configured to receive the display data sent by the server.
Optionally, referring to fig. 11, the scenario description data includes text data, and the scenario presentation module 1003 is further configured to:
and dynamically displaying the virtual character according to the animation data, and displaying text data on the upper layer of the virtual character so that at least one of the motion or expression of the virtual character is matched with the text data.
Optionally, referring to fig. 11, the scenario description data includes voice data, and the scenario presentation module 1003 is further configured to:
and in the process of playing the voice data, dynamically displaying the virtual character according to the animation data so as to enable at least one of the action or the expression of the virtual character to be matched with the voice data.
Optionally, referring to fig. 11, the presentation data at least includes first animation data, second animation data, first scenario description data matching the first animation data, and second scenario description data matching the second animation data, the second animation data being next animation data of the first animation data;
the scenario display module 1003 includes:
the first display unit 1013 is configured to dynamically display the virtual character according to the first animation data in a process of playing the first scenario description data, so that at least one of an action or an expression of the virtual character matches the first scenario description data;
the second display unit 1023 is configured to dynamically display the virtual character according to the second animation data in the process of playing the second scenario description data after the first scenario description data is played, so that at least one of the motion or the expression of the virtual character is matched with the second scenario description data.
Optionally, referring to fig. 11, the second display unit 1023 is further configured to:
and after the first scenario description data is played, responding to a continuous display instruction, and dynamically displaying the virtual character according to the second animation data in the process of playing the second scenario description data so as to enable at least one of the action or the expression of the virtual character to be matched with the second scenario description data.
Optionally, referring to fig. 11, the apparatus further comprises:
an entry exhibition module 1007 configured to exhibit at least two branching scenario entries, each branching scenario entry being associated with a branching scenario occurring after the first target scenario;
and the instruction generating module 1008 is used for responding to the selection operation of any branch scenario entrance, determining a second target scenario associated with the selected branch scenario entrance, and generating a display instruction of the second target scenario.
Optionally, referring to fig. 11, the presentation data further includes background data, and the apparatus further includes:
the background display module 1009 is configured to display a background corresponding to the background data according to the background data in the process of playing the scenario description data and dynamically displaying the virtual character.
Optionally, referring to fig. 11, the apparatus further comprises:
and the scenario skipping module 1010 is used for responding to the triggering operation of the skipping button in the process of playing the scenario description data and dynamically displaying the virtual character, stopping playing the scenario description data and stopping dynamically displaying the virtual character.
Optionally, referring to fig. 11, the apparatus further comprises:
and the scenario review module 1011 is configured to, after playing the scenario description data and dynamically displaying the virtual character, respond to a trigger operation on a review button, and dynamically display the virtual character again according to the animation data in the process of replaying the scenario description data, so that at least one of an action or an expression of the virtual character matches the scenario description data.
It should be noted that: in the scenario display apparatus provided in the above embodiment, when the scenario is displayed, only the division of the functional modules is exemplified, and in practical applications, the function distribution may be completed by different functional modules according to needs, that is, the internal structure of the terminal is divided into different functional modules to complete all or part of the functions described above. In addition, the scenario display apparatus and the scenario display method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments and are not described herein again.
Fig. 12 shows a schematic structural diagram of a terminal 1200 according to an exemplary embodiment of the present application.
In general, terminal 1200 includes: a processor 1201 and a memory 1202.
The processor 1201 may include one or more processing cores, for example a 4-core or an 8-core processor. The processor 1201 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 1201 may also include a main processor and a coprocessor: the main processor, also referred to as a CPU (Central Processing Unit), processes data in the wake-up state, while the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 1201 may be integrated with a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1201 may further include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 1202 may include one or more computer-readable storage media, which may be non-transitory. The memory 1202 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1202 is used to store at least one instruction, and the at least one instruction is executed by the processor 1201 to implement the scenario presentation method provided by the method embodiments of the present application.
In some embodiments, the terminal 1200 may further optionally include a peripheral interface 1203 and at least one peripheral. The processor 1201, the memory 1202, and the peripheral interface 1203 may be connected by a bus or signal line, and each peripheral may be connected to the peripheral interface 1203 via a bus, signal line, or circuit board. Specifically, the peripherals include at least one of: a radio frequency circuit 1204, a touch display 1205, a camera assembly 1206, an audio circuit 1207, a positioning component 1208, and a power supply 1209.
The peripheral interface 1203 may be used to connect at least one I/O (Input/Output) related peripheral to the processor 1201 and the memory 1202. In some embodiments, the processor 1201, the memory 1202, and the peripheral interface 1203 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1201, the memory 1202, and the peripheral interface 1203 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1204 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1204 communicates with communication networks and other communication devices through electromagnetic signals, converting electric signals into electromagnetic signals for transmission and converting received electromagnetic signals back into electric signals. Optionally, the radio frequency circuit 1204 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 1204 may communicate with other terminals through at least one wireless communication protocol, including but not limited to: metropolitan area networks, mobile communication networks of each generation (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1204 may further include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display 1205 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display 1205 is a touch display, it can also capture touch signals on or over its surface; such touch signals may be input to the processor 1201 as control signals for processing, and the display 1205 may then also provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments there may be one display 1205, occupying the front panel of the terminal 1200; in other embodiments there may be at least two displays 1205, each disposed on a different surface of the terminal 1200 or in a folded design; in still other embodiments, the display 1205 may be a flexible display disposed on a curved or folded surface of the terminal 1200, and it may even be set in a non-rectangular irregular pattern, that is, a shaped screen. The display 1205 may be manufactured using materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 1206 is used to capture images or video. Optionally, the camera assembly 1206 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal 1200 and the rear camera on the rear surface. In some embodiments there are at least two rear cameras, each being one of a main camera, a depth-of-field camera, a wide-angle camera, or a telephoto camera, so that the main camera can be fused with the depth-of-field camera to realize a background blurring function, or with the wide-angle camera to realize panoramic shooting, VR (Virtual Reality) shooting, or other fused shooting functions. In some embodiments, the camera assembly 1206 may also include a flash, which can be a monochrome-temperature or a dual-color-temperature flash. A dual-color-temperature flash combines a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 1207 may include a microphone and a speaker. The microphone collects sound waves from the user and the environment, converts them into electric signals, and inputs them to the processor 1201 for processing or to the radio frequency circuit 1204 for voice communication. For stereo capture or noise reduction, multiple microphones may be provided at different locations of the terminal 1200; the microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker converts electric signals from the processor 1201 or the radio frequency circuit 1204 into sound waves, and may be a conventional diaphragm speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can convert electric signals not only into sound waves audible to humans, but also into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 1207 may also include a headphone jack.
The positioning component 1208 is used to locate the current geographic position of the terminal 1200 to implement navigation or LBS (Location Based Service). The positioning component 1208 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 1209 is used to supply power to the various components in the terminal 1200. The power supply 1209 may be an alternating current supply, a direct current supply, a disposable battery, or a rechargeable battery. When the power supply 1209 includes a rechargeable battery, the battery may support wired or wireless charging, and may also support fast-charging technology.
In some embodiments, terminal 1200 also includes one or more sensors 1210. The one or more sensors 1210 include, but are not limited to: acceleration sensor 1211, gyro sensor 1212, pressure sensor 1213, fingerprint sensor 1214, optical sensor 1215, and proximity sensor 1216.
The acceleration sensor 1211 can detect magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal 1200. For example, the acceleration sensor 1211 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1201 may control the touch display 1205 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1211. The acceleration sensor 1211 may also be used for acquisition of motion data of an application or a user.
The gyro sensor 1212 may detect a body direction and a rotation angle of the terminal 1200, and the gyro sensor 1212 may collect a 3D motion of the user on the terminal 1200 in cooperation with the acceleration sensor 1211. The processor 1201 can implement the following functions according to the data collected by the gyro sensor 1212: motion sensing (such as changing the UI according to a tilt operation of the user), image stabilization at the time of photographing, application control, and inertial navigation.
Pressure sensors 1213 may be disposed on a side bezel of the terminal 1200 and/or an underlying layer of the touch display 1205. When the pressure sensor 1213 is disposed on the side bezel, it can detect the user's holding signal on the terminal 1200, and the processor 1201 performs left-right hand recognition or shortcut operations according to that signal. When the pressure sensor 1213 is disposed at the lower layer of the touch display 1205, the processor 1201 controls the operability controls on the UI according to the user's pressure operation on the touch display 1205. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1214 is used to collect a user's fingerprint. The processor 1201 identifies the user's identity from the fingerprint collected by the fingerprint sensor 1214, or the fingerprint sensor 1214 identifies the user's identity from the collected fingerprint. When the identity is recognized as trusted, the processor 1201 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1214 may be disposed on the front, back, or side of the terminal 1200; when a physical button or a manufacturer Logo is provided on the terminal 1200, the fingerprint sensor 1214 may be integrated with the physical button or the manufacturer Logo.
The optical sensor 1215 is used to collect the ambient light intensity. In one embodiment, the processor 1201 may control the display brightness of the touch display 1205 according to the ambient light intensity collected by the optical sensor 1215: when the ambient light intensity is high, the display brightness is turned up; when it is low, the display brightness is turned down. In another embodiment, the processor 1201 may also dynamically adjust the shooting parameters of the camera assembly 1206 according to the ambient light intensity collected by the optical sensor 1215.
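A toy sketch of the brightness mapping this paragraph describes — pure logic only, with illustrative thresholds; a real device would read the sensor and set the display through the platform's own APIs.

// Hypothetical sketch: map ambient light intensity (lux) to a display brightness
// fraction. The 0–1000 lux range and the 0.2–1.0 brightness band are assumptions.
function brightnessFor(ambientLux: number): number {
  const minBrightness = 0.2;
  const maxBrightness = 1.0;
  const clamped = Math.min(Math.max(ambientLux, 0), 1000);
  return minBrightness + (maxBrightness - minBrightness) * (clamped / 1000); // brighter ambient -> brighter screen
}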
The proximity sensor 1216, also known as a distance sensor, is typically disposed on the front panel of the terminal 1200 and is used to collect the distance between the user and the front of the terminal 1200. In one embodiment, when the proximity sensor 1216 detects that this distance is gradually decreasing, the processor 1201 controls the touch display 1205 to switch from the screen-on state to the screen-off state; when the distance gradually increases, the processor 1201 controls the touch display 1205 to switch from the screen-off state back to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 12 is not intended to be limiting of terminal 1200 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
Fig. 13 is a schematic structural diagram of a server 1300 according to an embodiment of the present application. The server 1300 may vary greatly in configuration or performance and may include one or more processors (CPUs) 1301 and one or more memories 1302, where the memory 1302 stores at least one instruction that is loaded and executed by the processor 1301 to implement the methods provided by the foregoing method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for performing input and output, and may include other components for implementing device functions, which are not described here again.
The server 1300 may be configured to perform the steps performed by the server in the scenario presentation method.
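For illustration, a minimal sketch of the server-side lookup described here (and recited in claim 3 below), using Node's built-in http module; the /presentation-data route, the scenarioId parameter, and the data shapes are assumptions, not part of the disclosure.

// Hypothetical sketch: a server answering a display-data acquisition request
// that carries a scenario identifier.
import * as http from "http";

interface PresentationData { animationData: string; scenarioDescription: string; }

const store = new Map<string, PresentationData>([
  ["scenario-1", { animationData: "anim-blob-1", scenarioDescription: "Chapter 1 ..." }],
]);

http.createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  if (url.pathname === "/presentation-data") {
    const id = url.searchParams.get("scenarioId") ?? "";
    const data = store.get(id);
    if (data) {
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify(data));       // presentation data sent back to the terminal
      return;
    }
    res.writeHead(404);
    res.end("unknown scenario identifier");
    return;
  }
  res.writeHead(404);
  res.end();
}).listen(8080);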
The embodiment of the present application further provides a computer device for showing a scenario, where the computer device includes a processor and a memory, where the memory stores at least one instruction, and the at least one instruction is loaded and executed by the processor to implement the operations performed in the scenario showing method of the foregoing embodiment.
The embodiment of the present application further provides a computer-readable storage medium, where at least one instruction is stored in the computer-readable storage medium, and the at least one instruction is loaded and executed by a processor to implement the operations performed in the scenario presentation method of the foregoing embodiment.
The embodiment of the present application further provides a computer program storing at least one instruction, and the at least one instruction is loaded and executed by a processor to implement the operations performed in the scenario presentation method of the foregoing embodiment.
Those skilled in the art will understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium such as a read-only memory, a magnetic disk, or an optical disk.
The above description is only an alternative embodiment of the present application and is not intended to limit the present application, and any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. A scenario presentation method, the method comprising:
receiving a display instruction of a first target scenario;
responding to the display instruction, obtaining display data of the first target scenario, wherein the display data comprises at least animation data of a virtual character and scenario description data matching the animation data;
and in the process of playing the scenario description data, dynamically displaying the virtual character according to the animation data, so that at least one of the action or the expression of the virtual character matches the scenario description data.
2. The method of claim 1, wherein before receiving the display instruction of the first target scenario, the method further comprises:
dividing a virtual character image of the virtual character into a plurality of part images, wherein each part image comprises a part of the virtual character;
determining the motion data of each part image according to the scenario description data;
and fusing the motion data of each part image to obtain the animation data.
3. The method of claim 1, wherein the obtaining display data of the first target scenario in response to the display instruction comprises:
responding to the display instruction, sending a display data acquisition request to a server, the display data acquisition request carrying a scenario identifier of the first target scenario, wherein the server is configured to obtain the display data corresponding to the scenario identifier;
and receiving the display data sent by the server.
4. The method of claim 1, wherein the scenario description data comprises text data, and the dynamic display of the virtual character according to the animation data during the playing of the scenario description data comprises:
dynamically displaying the virtual character according to the animation data and displaying the text data on an upper layer above the virtual character, so that at least one of the action or the expression of the virtual character matches the text data.
5. The method of claim 1, wherein the scenario description data comprises voice data, and the dynamic display of the virtual character according to the animation data during the playing of the scenario description data comprises:
in the process of playing the voice data, dynamically displaying the virtual character according to the animation data, so that at least one of the action or the expression of the virtual character matches the voice data.
6. The method according to claim 1, wherein the display data includes at least first animation data, second animation data, first scenario description data matching the first animation data, and second scenario description data matching the second animation data, the second animation data being the next segment of animation data after the first animation data;
the dynamically displaying the virtual character according to the animation data in the process of playing the scenario description data, so that at least one of the action or the expression of the virtual character matches the scenario description data, comprises:
in the process of playing first scenario description data, dynamically displaying the virtual character according to the first animation data so as to enable at least one of the action or the expression of the virtual character to be matched with the first scenario description data;
and after the first scenario description data is played, dynamically displaying the virtual character according to the second animation data in the process of playing second scenario description data so as to enable at least one of the action or the expression of the virtual character to be matched with the second scenario description data.
7. The method according to claim 1, wherein after the virtual character is dynamically displayed according to the animation data during the playing of the scenario description data, the method further comprises:
displaying at least two branching scenario entries, wherein each branching scenario entry is associated with a branching scenario occurring after the first target scenario;
and in response to a selection operation on any branching scenario entry, determining a second target scenario associated with the selected entry, and generating a display instruction of the second target scenario.
8. A scenario presentation apparatus, the apparatus comprising:
an instruction receiving module, configured to receive a display instruction of a first target scenario;
a data acquisition module, configured to obtain, in response to the display instruction, display data of the first target scenario, wherein the display data comprises at least animation data of a virtual character and scenario description data matching the animation data;
and a scenario display module, configured to dynamically display the virtual character according to the animation data in the process of playing the scenario description data, so that at least one of the action or the expression of the virtual character matches the scenario description data.
9. A computer device, comprising a processor and a memory, wherein at least one instruction is stored in the memory, and wherein the at least one instruction is loaded and executed by the processor to perform the operations recited in the scenario presentation method of any one of claims 1 to 7.
10. A computer-readable storage medium having stored therein at least one instruction, which is loaded and executed by a processor, to perform operations performed in the scenario presentation method of any one of claims 1 to 7.
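For illustration of the part-image technique recited in claim 2, a hypothetical TypeScript sketch follows: the character is split into part images, each part receives motion data derived from the scenario description, and the per-part tracks are fused into a single animation timeline. Every name and the keyframe format are illustrative assumptions, not the disclosed implementation.

// Hypothetical sketch: part images -> per-part motion data -> fused animation.
interface PartImage { name: string; pixels: Uint8ClampedArray; }

interface Keyframe { timeMs: number; x: number; y: number; rotationDeg: number; }

type MotionData = Map<string, Keyframe[]>;   // part name -> keyframe track

function motionFromDescription(parts: PartImage[], description: string): MotionData {
  const motion: MotionData = new Map();
  for (const part of parts) {
    // Toy rule: a description containing "wave" animates the arm; everything
    // else gets an idle track. A real system would use richer mark-up.
    const track: Keyframe[] =
      part.name === "arm" && description.includes("wave")
        ? [{ timeMs: 0, x: 0, y: 0, rotationDeg: 0 },
           { timeMs: 500, x: 0, y: -10, rotationDeg: 30 }]
        : [{ timeMs: 0, x: 0, y: 0, rotationDeg: 0 }];
    motion.set(part.name, track);
  }
  return motion;
}

// Fusion step: merge the per-part tracks into a single timeline ordered by time.
function fuseAnimation(motion: MotionData): Array<{ timeMs: number; part: string; frame: Keyframe }> {
  const timeline: Array<{ timeMs: number; part: string; frame: Keyframe }> = [];
  for (const [part, track] of motion) {
    for (const frame of track) timeline.push({ timeMs: frame.timeMs, part, frame });
  }
  return timeline.sort((a, b) => a.timeMs - b.timeMs);
}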
CN202010213234.XA 2020-03-24 2020-03-24 Plot showing method, plot showing device, plot showing equipment and storage medium Pending CN111437600A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010213234.XA CN111437600A (en) 2020-03-24 2020-03-24 Plot showing method, plot showing device, plot showing equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111437600A (en) 2020-07-24

Family ID: 71629520

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010213234.XA Pending CN111437600A (en) Plot showing method, plot showing device, plot showing equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111437600A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11137849A (en) * 1997-11-07 1999-05-25 Daiichikosho Co Ltd Computer game device
CN101689278A (en) * 2007-06-21 2010-03-31 微软公司 Responsive cutscenes in video games
CN102160082A (en) * 2008-07-22 2011-08-17 索尼在线娱乐有限公司 System and method for providing persistent character personalities in a simulation
CN104574469A (en) * 2014-12-22 2015-04-29 北京像素软件科技股份有限公司 Plot cartoon implementation method and device
CN106512401A (en) * 2016-10-21 2017-03-22 苏州天平先进数字科技有限公司 User interaction system

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114979743A (en) * 2021-02-25 2022-08-30 腾讯科技(深圳)有限公司 Method, device, equipment and medium for displaying audiovisual works
CN114979743B (en) * 2021-02-25 2024-01-16 腾讯科技(深圳)有限公司 Method, device, equipment and medium for displaying audiovisual works
CN113521758A (en) * 2021-08-04 2021-10-22 北京字跳网络技术有限公司 Information interaction method and device, electronic equipment and storage medium
CN113521758B (en) * 2021-08-04 2023-10-24 北京字跳网络技术有限公司 Information interaction method and device, electronic equipment and storage medium
WO2023016176A1 (en) * 2021-08-11 2023-02-16 北京字跳网络技术有限公司 Plot animation playing method, plot animation generation method and apparatus, and terminal and device

Similar Documents

Publication Publication Date Title
CN110841285B (en) Interface element display method and device, computer equipment and storage medium
CN109874312B (en) Method and device for playing audio data
CN110971930A (en) Live virtual image broadcasting method, device, terminal and storage medium
CN108694073B (en) Control method, device and equipment of virtual scene and storage medium
CN109327608B (en) Song sharing method, terminal, server and system
CN109922356B (en) Video recommendation method and device and computer-readable storage medium
CN112181572A (en) Interactive special effect display method and device, terminal and storage medium
CN112044065B (en) Virtual resource display method, device, equipment and storage medium
CN110288689B (en) Method and device for rendering electronic map
CN111437600A (en) Plot showing method, plot showing device, plot showing equipment and storage medium
CN111339938A (en) Information interaction method, device, equipment and storage medium
CN110543350A (en) Method and device for generating page component
CN111459363A (en) Information display method, device, equipment and storage medium
CN111083526B (en) Video transition method and device, computer equipment and storage medium
CN111402844B (en) Song chorus method, device and system
CN110677713B (en) Video image processing method and device and storage medium
CN113867606A (en) Information display method and device, electronic equipment and storage medium
CN112822544B (en) Video material file generation method, video synthesis method, device and medium
CN112023403B (en) Battle process display method and device based on image-text information
CN111368114A (en) Information display method, device, equipment and storage medium
CN108228052B (en) Method and device for triggering operation of interface component, storage medium and terminal
CN110300275B (en) Video recording and playing method, device, terminal and storage medium
CN113485596B (en) Virtual model processing method and device, electronic equipment and storage medium
CN114245218A (en) Audio and video playing method and device, computer equipment and storage medium
CN114595019A (en) Theme setting method, device and equipment of application program and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200724