CN112221138A - Sound effect playing method, device, equipment and storage medium in virtual scene - Google Patents


Info

Publication number: CN112221138A (granted as CN112221138B)
Application number: CN202011165714.XA
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: virtual, sound effect, magnitude, volume, quality
Inventor: 王树强
Assignee (original and current): Tencent Technology (Shenzhen) Co., Ltd.
Legal status: Granted; Active

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50: Controlling the output signals based on the game progress
    • A63F 13/54: Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
    • A63F 13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Stereophonic System (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a sound effect playing method, device, equipment and storage medium for a virtual scene, belonging to the field of sound effect processing. The method comprises the following steps: displaying a virtual scene picture, wherein the virtual scene picture comprises a virtual character and the virtual character carries a virtual article; obtaining attribute parameters of the virtual article in a physical dimension, wherein the attribute parameters comprise: at least one of mass, quantity, and volume; and in response to the attribute parameters satisfying a first matching relationship, playing the sound effect corresponding to the attribute parameters, wherein the first matching relationship represents the correspondence between the attribute parameters of the virtual article and sound effects. Because the sound effects triggered when the virtual character carries virtual articles with different attribute parameters differ, the real world is simulated and the sound effects of the client are diversified.

Description

Sound effect playing method, device, equipment and storage medium in virtual scene
Technical Field
The present invention relates to the field of sound effect processing, and in particular, to a sound effect playing method, apparatus, and device in a virtual scene, and a storage medium.
Background
When a user plays a game through a game application, the terminal used by the user plays game sound effects. A game sound effect is an effect made by sound generated in a game scene, and is generally used to enhance the realism of the scene and create the scene atmosphere.
Taking a first-person shooter game as an example, the game application sets a corresponding sound effect for each action executed by the virtual character controlled by the user: when the virtual character runs in the virtual scene, the terminal plays the corresponding footstep sound; when the virtual character shoots in the virtual scene, the terminal plays a gunshot; when the virtual character rides a vehicle (for example, an automobile) in the virtual scene, the terminal plays the driving sound effect of the vehicle (the engine sound of the driven automobile).
In this scheme, when the virtual character executes the same type of action, the terminal plays the same game sound effect. For example, the sound effect of the virtual character shooting with a pistol is the same as that of shooting with a rifle, yet a pistol and a rifle sound different in the real world, so the game sound effects presented by the game application simulate the real world poorly.
Disclosure of Invention
The embodiments of the application provide a sound effect playing method, device, equipment and storage medium for a virtual scene. By establishing a matching relationship between sound effects and the attribute parameters, in physical dimensions, of the virtual articles carried by a virtual character, the sound effects triggered when the virtual character carries virtual articles with different attribute parameters differ, thereby simulating the real world. The technical scheme is as follows:
according to an aspect of the present application, a sound effect playing method for a virtual scene is provided, the method includes:
displaying a virtual scene picture, wherein the virtual scene picture comprises a virtual character, and the virtual character carries a virtual article;
obtaining attribute parameters of the virtual article in a physical dimension, wherein the attribute parameters comprise: at least one of mass, quantity, and volume;
and in response to the attribute parameters satisfying a first matching relationship, playing a sound effect corresponding to the attribute parameters, wherein the first matching relationship represents the correspondence between the attribute parameters of the virtual article and the sound effect.
According to another aspect of the present application, there is provided an audio playback apparatus for a virtual scene, the apparatus including:
a display module, configured to display a virtual scene picture, wherein the virtual scene picture comprises a virtual character, and the virtual character carries a virtual article;
an obtaining module, configured to obtain attribute parameters of the virtual article in a physical dimension, wherein the attribute parameters comprise: at least one of quantity, mass, and volume;
and a playing module, configured to, in response to the attribute parameters satisfying a first matching relationship, play the sound effect corresponding to the attribute parameters, wherein the first matching relationship represents the correspondence between the attribute parameters of the virtual article and the sound effect.
According to another aspect of the present application, there is provided a computer device comprising: a processor and a memory, wherein at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the memory, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the sound effect playing method of the virtual scene as described above.
According to another aspect of the present application, there is provided a computer-readable storage medium having stored therein a computer program which is loaded and executed by a processor to implement the sound effect playing method of a virtual scene as described above.
According to another aspect of the application, a computer program product or computer program is provided, comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and executes the computer instructions, so that the computer device executes the sound effect playing method of the virtual scene as described above.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
the first matching relation is established between the attribute parameters of the virtual articles in the physical dimension and the sound effects, so that the client can play the corresponding sound effects according to the attribute parameters of the virtual articles acquired in real time, the sound effects triggered by the virtual characters when carrying the virtual articles with different attribute parameters are different, the simulation of the real world is realized, and the sound effects of the client are diversified.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings based on them without creative effort.
Fig. 1 is a block diagram of a terminal provided in an exemplary embodiment of the present application;
FIG. 2 is a flowchart of a sound effect playing method for a virtual scene according to an exemplary embodiment of the present application;
FIG. 3 is a schematic illustration of a listing interface for a virtual item provided by an exemplary embodiment of the present application;
FIG. 4 is a flowchart of a sound effect playing method for a virtual scene according to an exemplary embodiment of the present application;
FIG. 5 is a schematic illustration of a listing interface for a virtual item provided by another exemplary embodiment of the present application;
FIG. 6 is a schematic illustration of an operational interface of an audio engine provided by an exemplary embodiment of the present application;
FIG. 7 is a schematic illustration of a listing interface for a virtual article as provided by another exemplary embodiment of the present application;
FIG. 8 is a schematic illustration of an operational interface of an audio engine provided by another exemplary embodiment of the present application;
FIG. 9 is a schematic illustration of a listing interface for a virtual article as provided by another exemplary embodiment of the present application;
FIG. 10 is a schematic illustration of an operational interface of an audio engine provided by another exemplary embodiment of the present application;
FIG. 11 is a flowchart illustrating an audio effect playing method for a virtual scene according to another exemplary embodiment of the present application;
FIG. 12 is a schematic illustration of a listing interface for a virtual article as provided by another exemplary embodiment of the present application;
FIG. 13 is a schematic illustration of an operational interface of an audio engine provided by another exemplary embodiment of the present application;
FIG. 14 is a flowchart illustrating an audio effect playing method for a virtual scene according to another exemplary embodiment of the present application;
FIG. 15 is a flowchart illustrating an audio effect playing method for a virtual scene according to another exemplary embodiment of the present application;
FIG. 16 is a schematic illustration of an operational interface of an audio engine provided by another exemplary embodiment of the present application;
FIG. 17 is a schematic illustration of an operational interface of an audio engine provided by another exemplary embodiment of the present application;
FIG. 18 is a schematic illustration of an operational interface of an audio engine provided by another exemplary embodiment of the present application;
FIG. 19 is a flowchart of a method for a sound effects engine to construct a matching relationship provided by an exemplary embodiment of the present application;
FIG. 20 is a block diagram of an audio playback device for virtual scenes according to an exemplary embodiment of the present application;
fig. 21 is a schematic device structure diagram of a computer apparatus according to an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, embodiments of the present application are described in further detail below with reference to the accompanying drawings.
First, terms referred to in the embodiments of the present application are described:
Virtual scene: the scene that an application program displays (or provides) while running on a terminal. The virtual scene may be a simulation of the real world, a semi-simulated semi-fictional scene, or a purely fictional scene. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene, which is not limited in this application. The following embodiments take a three-dimensional virtual scene as the illustration.
Virtual character: a movable object in a virtual scene. The movable object can be a virtual person, a virtual animal, an animation character, etc., such as the characters, animals, plants, oil drums, walls, and stones displayed in a three-dimensional virtual scene. Optionally, the virtual object is a three-dimensional volumetric model created based on skeletal animation techniques. Each virtual object has its own shape and volume in the three-dimensional virtual scene, occupying a portion of its space. "Virtual object" broadly refers to one or more virtual objects in a virtual scene. Taking a game application as an example, a virtual character is the movable object that a user controls while participating in the game.
Sound effect: an effect made by sound, typically used to increase the realism of a scene and create its atmosphere. The embodiments of the application take game sound effects in a game application as an example, which include: background sound effects (sound effects generated by the virtual scene itself without human action, such as wind blowing through leaves), sound effects generated when virtual props are used (such as a virtual character shooting with a pistol), sound effects generated when the virtual character performs actions (such as running), and sounds played when controls are triggered. It should be noted that the sound effects in the embodiments of the present application are those generated when the attribute parameters, in physical dimensions, of the virtual items carried by the virtual character differ, that is, sound effects emitted by the virtual character.
First-person shooter (FPS) game: a shooting game that the user plays from a first-person perspective; the picture of the virtual scene in the game is the scene observed from the perspective of a first virtual object. In the game, at least two virtual objects fight in a single-round battle mode in the virtual scene. A virtual object survives by avoiding attacks initiated by other virtual objects and dangers existing in the virtual scene (such as a poison circle or a marsh); when the life value of a virtual object in the virtual scene reaches zero, its life in the virtual scene ends, and the virtual objects that ultimately survive are the winners. Optionally, each client may control one or more virtual objects in the virtual scene, with the time when the first client joins the battle as the start time and the time when the last client exits the battle as the end time. Optionally, the competitive mode of the battle may include a single-player battle mode, a two-player team battle mode, or a multi-player team battle mode; the battle mode is not limited in the embodiments of the present application.
The method provided in the present application may be applied to a three-dimensional map program, a military simulation program, an FPS game, a Multiplayer Online Battle Arena (MOBA) game, a Virtual Reality (VR) application, an Augmented Reality (AR) application, and the like; the following embodiments take a game application as an example.
A game based on virtual scenes consists of one or more maps of the game world, and the virtual scene in the game simulates a real-world scene. The user can control the virtual character in the game to walk, run, jump, shoot, fight, drive, be attacked by other virtual characters (virtual characters controlled by other users), be injured by the virtual scene, attack other virtual characters, and perform other actions in the virtual scene; the interactivity is strong, and multiple users can team up online to play a competitive game.
Schematically, when the virtual character executes different actions, the terminal used by the user plays the sound effect corresponding to the executed action; illustratively, to heighten the excitement of the game, the terminal plays a background sound effect with a faster rhythm.
The embodiments of the application provide a sound effect playing method for a virtual scene. By establishing matching relationships between sound effects and the attribute parameters of virtual articles in different dimensions, the terminal used by the user plays the corresponding sound effect when the virtual character carries virtual articles with different attribute parameters, and the sound effects are thus diversified.
Fig. 1 shows a block diagram of an implementation environment provided by an exemplary embodiment of the present application. The sound effect playing method in a virtual scene provided by the embodiments of the present application is applied to a terminal 100, on which a client 110 of an application program supporting virtual scenes is installed; the application program includes a three-dimensional map program, a military simulation program, an FPS game, a MOBA game, a VR application, or an AR application. The following embodiments take a game application as the example.
In some embodiments, the terminal comprises at least one of a smartphone, a tablet computer, an MP3 player, an MP4 player, a laptop computer, and a desktop computer; the following embodiments take a smartphone as the example.
Taking the client 110 as a client of the game application program for example, the client 110 includes a display unit 111, an obtaining unit 112, a sound effect data packet 113, and a playing unit 114.
The display unit 111 is configured to display a virtual scene picture, where the virtual scene picture includes a virtual character controlled by the user and the virtual character carries a virtual item; illustratively, the virtual item is one the virtual character is holding, or one carried in the backpack the virtual character is equipped with.
The obtaining unit 112 is configured to obtain, in real time, the attribute parameters, in physical dimensions, of the virtual articles carried by the virtual character, where the attribute parameters include at least one of the mass of the virtual article, the volume of the virtual article, the number of virtual articles, and the type of the virtual article. In some embodiments, the obtaining unit 112 is further configured to obtain the motion state of the virtual character in real time, where the motion state includes at least one of walking, running, jumping, squatting, lying down, and sprinting.
The sound effect data packet 113 stores the matching relationship between sound effects and the attribute parameters of virtual articles in physical dimensions, or the matching relationship among the motion state of the virtual character, the attribute parameters of virtual articles in physical dimensions, and sound effects. The sound effect data packet 113 is generated by an audio engine (the Wwise middleware), which includes a Switch function for associating the acquired attribute parameters of the virtual item with the sound effects corresponding to those parameters.
The playing unit 114 is configured to play the sound effect corresponding to the attribute parameters of the virtual item according to the matching relationship in the sound effect data packet. Schematically, multiple sound effects are stored in the sound effect data packet, or in the game client, and the playing unit 114 selects from among them, according to the matching relationship, the sound effect that satisfies it and plays it. The timbres of the sound effects differ.
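As a rough, non-authoritative sketch (not part of the patent text), the cooperation between the sound effect data packet and the playing unit described above might look as follows in Python; all class names, predicates, and sound names are invented for illustration:

```python
# Hypothetical sketch of the client units described above. The matching
# relationship is modeled as an ordered list of (predicate, sound name)
# pairs; the first predicate satisfied by the attribute parameters wins.

class SoundEffectDataPacket:
    """Stores the matching relationship between attribute parameters and sound effects."""
    def __init__(self, matching):
        self.matching = matching  # list of (predicate, sound_name) pairs

    def match(self, attribute_params):
        for predicate, sound_name in self.matching:
            if predicate(attribute_params):
                return sound_name
        return None

class PlayingUnit:
    """Selects and 'plays' the sound effect that satisfies the matching relationship."""
    def __init__(self, data_packet):
        self.data_packet = data_packet
        self.last_played = None

    def play_for(self, attribute_params):
        sound = self.data_packet.match(attribute_params)
        if sound is not None:
            self.last_played = sound  # a real client would hand this to the audio engine
        return sound

# Example: play "heavy_rustle" when total mass exceeds 50 kg, else "light_rustle".
packet = SoundEffectDataPacket([
    (lambda p: p["mass"] > 50, "heavy_rustle"),
    (lambda p: True, "light_rustle"),
])
player = PlayingUnit(packet)
print(player.play_for({"mass": 65}))  # heavy_rustle
```

The ordered-predicate list stands in for the Switch-based relationship the patent describes; a production client would query the audio engine rather than return strings.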
Fig. 2 is a flowchart illustrating a sound effect playing method for a virtual scene according to an exemplary embodiment of the present application. The method may be applied to the terminal 100 shown in fig. 1, on which a client 110 supporting virtual scenes is installed. The method comprises the following steps:
step 201, displaying a virtual scene picture, where the virtual scene picture includes a virtual character, and the virtual character carries a virtual article.
Illustratively, an application program supporting virtual scene operation is installed and operated on a terminal used by a user, and when the user operates the application program, a picture corresponding to the application program is displayed on a display screen of the terminal.
Taking a game application as an example, when the game application is running, a display screen of the terminal displays a virtual scene picture in the game. Illustratively, the virtual scene picture is a picture obtained by observing a virtual scene at a first person perspective of the virtual character, or the virtual scene picture is a picture obtained by observing a virtual scene at a third person perspective of the virtual character.
When the user controls the virtual character to play the game, the virtual character carries a virtual article, usually a virtual prop, which is a prop the virtual character can use in the virtual scene. The types of virtual props include at least one of: firearm props, defense props (such as a bulletproof vest or a shield), cold-weapon props (such as a knife, spear, sword, halberd, axe, tomahawk, hook, or fork), accessory props (such as a magazine or an 8x scope), energy-supplement props (such as energy drinks and food), throwing props (such as a bomb or a flash bomb), and treatment props (such as medicine and gauze).
The virtual character is equipped with a virtual backpack before entering the virtual scene, or acquires one by picking it up after entering the virtual scene. The virtual backpack consists of backpack grids, each of which holds one or more virtual props. The backpack grids are all the same size, or each grid's size adapts to the size of the virtual prop. As shown in fig. 3, the user can open the virtual backpack by selecting the backpack bar 21, i.e., display the item list corresponding to the virtual backpack. The first backpack grid 23, occupied by the bandolier (a bag-type prop for carrying bullets that can be hung on the virtual character's body, also called a bullet bag), is the largest of all the backpack grids; the second backpack grid 22, occupied by the military backpack, is smaller than the first backpack grid 23; and the third backpack grid 24, occupied by the triangular grip (an accessory-type prop that can be installed on a firearm to make it easier for the virtual character to hold), is smaller than the second backpack grid 22.
The user controls the virtual character to pick up virtual articles in the virtual scene; a picked-up virtual article is placed in a backpack grid of the virtual backpack. When the virtual backpack has no empty grid, the user can control the virtual character to discard some virtual articles to free up an empty grid.
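The backpack-grid behavior described above (pick up into an empty grid, discard to free a grid) can be sketched as follows; the `VirtualBackpack` class and the item names are hypothetical, not taken from the patent:

```python
# Hypothetical sketch of the virtual-backpack behavior described above:
# picked-up items occupy backpack grids; when no empty grid remains,
# an item must be discarded before another can be picked up.

class VirtualBackpack:
    def __init__(self, grid_count):
        self.grid_count = grid_count
        self.items = []  # one item per grid, for simplicity

    def pick_up(self, item):
        if len(self.items) >= self.grid_count:
            return False  # no empty backpack grid
        self.items.append(item)
        return True

    def discard(self, item):
        if item in self.items:
            self.items.remove(item)
            return True
        return False

bag = VirtualBackpack(grid_count=2)
assert bag.pick_up("magazine")
assert bag.pick_up("energy_drink")
assert not bag.pick_up("bandage")   # backpack full, pick-up fails
bag.discard("magazine")
assert bag.pick_up("bandage")       # succeeds after discarding
```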
Further, the virtual scene displayed by the virtual scene screen further includes: at least one element selected from the group consisting of mountains, flat ground, rivers, lakes, oceans, deserts, sky, plants, buildings, and vehicles.
Step 202, obtaining attribute parameters of the virtual article in a physical dimension, wherein the attribute parameters of the physical dimension include: at least one of mass, quantity, and volume.
Each virtual item in the virtual backpack has its own attribute parameters in physical dimensions: mass, quantity, and volume. Illustratively, the mass and volume of a virtual article are expressed in the real-world units for mass and volume; for example, a bottle of energy drink has a volume of 500 milliliters (ml) and an apple has a mass of 150 grams (g).
In this embodiment of the application, the game application obtains the attribute parameters of all the virtual items in the virtual backpack; taking quantity as the attribute parameter, for example, it obtains the total number of virtual items in the virtual backpack.
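A minimal sketch of this step, under the assumption that each backpack entry records a mass, a volume, and a quantity (the item tuples below are invented examples, echoing the energy-drink and apple figures mentioned earlier):

```python
# Hypothetical sketch of Step 202: sum the physical-dimension attribute
# parameters (mass in grams, volume in millilitres, quantity) over all
# virtual items in the backpack.

backpack = [
    # (name, mass_g, volume_ml, quantity)
    ("energy_drink", 550, 500, 2),
    ("apple", 150, 120, 3),
]

total_mass = sum(mass * qty for _, mass, _, qty in backpack)
total_volume = sum(vol * qty for _, _, vol, qty in backpack)
total_quantity = sum(qty for *_, qty in backpack)

print(total_mass, total_volume, total_quantity)  # 1550 1360 5
```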
Step 203, in response to the attribute parameters satisfying the first matching relationship, playing the sound effect corresponding to the attribute parameters, where the first matching relationship represents the correspondence between the attribute parameters of the virtual article and the sound effect.
The first matching relationship is obtained from a first data packet generated by an audio engine; the Switch function in the audio engine is used to construct the first matching relationship. Schematically, the Wwise middleware is used as the example audio engine; Wwise is a sound-processing tool that can test multiple sound effects in game applications.
The first matching relationship is constructed with the Switch function of the Wwise middleware from the attribute parameters, in physical dimensions, of the virtual articles carried by the virtual character and the sound effects corresponding to those parameters; the first matching relationship is packaged in a first data packet, and the first data packet is embedded into the game application when its developer designs the application. Illustratively, after a game application embedded with the first data packet is installed on the terminal used by the user, the application acquires, while running, the attribute parameters of the virtual articles carried by the virtual character in real time and matches them against the first matching relationship, and the terminal plays the sound effect corresponding to the attribute parameters.
In one example, the total number of virtual items carried by the virtual character is 10, and the first matching relationship specifies that when the total number of virtual items is greater than 5, sound effect A is played; according to the first matching relationship, the terminal used by the user therefore plays sound effect A.
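This count-threshold example can be sketched as a small function; the name `select_sound_effect` and the sound label are illustrative only:

```python
# Sketch of the example above: the first matching relationship says that
# when the total number of carried virtual items exceeds 5, sound effect A
# is played.

def select_sound_effect(total_items):
    # first matching relationship: total number > 5 -> sound effect A
    if total_items > 5:
        return "sound_effect_A"
    return None  # no sound effect matched for this parameter value

assert select_sound_effect(10) == "sound_effect_A"  # 10 items carried -> plays A
assert select_sound_effect(3) is None
```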
It should be noted that the different sound effects generated when the virtual character carries virtual articles with different attribute parameters do not belong to the background sound effects of the virtual scene. By analogy with real life, people make different sounds when carrying articles of different sizes: generally a louder sound when carrying larger articles and a quieter sound when carrying smaller ones.
In summary, in the method provided by this embodiment, the first matching relationship is established between the attribute parameters of the virtual items and the sound effects, so that the client can play the corresponding sound effect according to the attribute parameters acquired in real time. The sound effects triggered when the virtual character carries virtual items with different attribute parameters differ, which simulates the real world and diversifies the client's sound effects.
The correspondence between the sound effects and the attribute parameters of the virtual articles carried by the virtual character is explained below in combination with the user interface.
1. The attribute parameters of the virtual item comprise the mass of the virtual item.
Fig. 4 shows a flowchart of a sound effect playing method for a virtual scene according to another exemplary embodiment of the present application. The method may be applied to the terminal 100 shown in fig. 1, on which a client 110 supporting virtual scenes is installed. The method comprises the following steps:
step 401, displaying a virtual scene picture, where the virtual scene picture includes a virtual character, and the virtual character carries a virtual article.
Taking a game application as an example, when the game application starts, the terminal displays a virtual scene picture. When the user controls the virtual character to participate in the game, the virtual character is displayed in the virtual scene picture. Illustratively, the virtual character is equipped with a virtual backpack that carries virtual articles, which the virtual character obtained in the virtual scene by picking them up.
Step 402, acquiring the quality of the virtual article.
Illustratively, the game application obtains the total mass of the virtual items in the virtual backpack in real time, or the game application obtains the total mass of the virtual items in the virtual backpack at preset time intervals. The total mass of the virtual items is the sum of the masses of the individual virtual items.
Step 403, in response to the mass magnitude corresponding to the mass of the virtual items satisfying the first matching relationship, playing the sound effect corresponding to that mass magnitude.
The total mass of the virtual items in the virtual backpack is divided into three mass magnitudes, and a corresponding sound effect is matched for each mass magnitude.
1) In response to the mass of the virtual items belonging to a first mass magnitude, playing a first sound effect corresponding to the first mass magnitude according to the first matching relationship, where the first mass magnitude is the mass magnitude corresponding to the mass of the virtual items being less than or equal to a first mass threshold.
As shown in fig. 5, the backpack grids of the virtual backpack are all the same size, and the throwing-type prop is located in backpack grid 31. A first attribute parameter 32 and a second attribute parameter 33 are also displayed in the item list corresponding to the virtual backpack: the first attribute parameter 32 represents the total mass of the virtual items currently equipped in the virtual backpack, and the second attribute parameter 33 represents the total mass of virtual items the virtual backpack can accommodate. As shown in fig. 5, one throwing-type prop of mass 15 kilograms is equipped in the virtual backpack, and the virtual backpack can accommodate a total mass of 100 kilograms.
Fig. 6 shows an operation interface diagram of an audio engine provided in an exemplary embodiment of the present application. On the left side of the operation interface 30 is a matching relationship constructed based on the switch function (Switch) 35, including a first matching relationship that contains the correspondence between the mass 36 of the virtual items and the sound effects. Three subset options are included under the mass 36 of the virtual items: a heavy sound effect (second sound effect), a light sound effect (first sound effect) and a medium sound effect (third sound effect).
On the right side of the operation interface 30 is a graph of the correspondence between the mass magnitude and the sound effect. The abscissa 37 of the graph is the total mass of the virtual items carried by the virtual character, and the ordinate 38 is the sound effect corresponding to that total mass. As can be seen from the graph, when the total mass of the virtual items is between 0 and 40 kilograms (the first mass magnitude 34, indicated by the bold horizontal line), the terminal plays the light sound effect (first sound effect) corresponding to the first mass magnitude. Combined with the virtual backpack shown in fig. 5, the total mass of the currently equipped virtual items is 15 kilograms, which lies in the first mass magnitude 34, so the terminal plays the first sound effect. The first mass magnitude is the mass magnitude corresponding to the mass of the virtual items being less than or equal to the first mass threshold (40 kilograms).
It should be noted that even when the user does not control the virtual character to open the virtual backpack, the game application can still obtain the total mass of the virtual items carried by the virtual character; the display interface of fig. 5 is used only to make the correspondence between the total mass of the carried virtual items and the sound effect easier to understand.
2) In response to the mass of the virtual items belonging to a second mass magnitude, playing a second sound effect corresponding to the second mass magnitude according to the first matching relationship, where the second mass magnitude is the mass magnitude corresponding to the mass of the virtual items being greater than or equal to a second mass threshold.
As shown in fig. 7, the backpack grids of the virtual backpack are all the same size. The throwing-type prop is located in the first backpack grid 41, and the virtual backpack contains three types of energy-supplement props: the first type is located in the second backpack grids 42 (illustratively, several props of the first type occupy several second backpack grids 42, i.e. each second backpack grid 42 holds one prop of the first type), the second type is located in the third backpack grid 43, and the third type is located in the fourth backpack grid 44. The total mass of the virtual items currently equipped in the virtual backpack is 80.55 kilograms.
As shown in fig. 8, the left side of the operation interface 40 of the audio engine likewise displays the first matching relationship, constructed based on the switch function 35, between the mass 36 of the virtual items and the sound effects, and the right side of the operation interface 40 displays a graph of the correspondence between the mass magnitude and the sound effect. As can be seen from the graph, when the total mass of the virtual items is between 75 and 100 kilograms (the second mass magnitude 45, indicated by the bold horizontal line), the terminal plays the heavy sound effect (second sound effect) corresponding to the second mass magnitude. Combined with the virtual backpack shown in fig. 7, the total mass of the currently equipped virtual items is 88.55 kilograms, which lies in the second mass magnitude 45, so the terminal plays the second sound effect. The second mass magnitude is the mass magnitude corresponding to the mass of the virtual items being greater than or equal to the second mass threshold (75 kilograms).
3) In response to the mass of the virtual items belonging to a third mass magnitude, playing a third sound effect corresponding to the third mass magnitude according to the first matching relationship, where the third mass magnitude is the mass magnitude corresponding to the mass of the virtual items being greater than the first mass threshold and less than the second mass threshold.
As shown in fig. 9, the backpack grids of the virtual backpack are all the same size. The throwing-type prop is located in the first backpack grid 51, and the energy-supplement props are located in the second backpack grids 52; illustratively, several energy-supplement props occupy several second backpack grids 52, i.e. each second backpack grid 52 holds one energy-supplement prop. The total mass of the virtual items currently equipped in the virtual backpack is 63.10 kilograms.
As shown in fig. 10, the left side of the operation interface 50 of the audio engine likewise displays the first matching relationship, constructed based on the switch function 35, between the mass 36 of the virtual items and the sound effects, and the right side of the operation interface 50 displays a graph of the correspondence between the mass magnitude and the sound effect. As can be seen from the graph, when the total mass of the virtual items is between 40 and 75 kilograms (the third mass magnitude 53, indicated by the bold horizontal line), the terminal plays the medium sound effect (third sound effect) corresponding to the third mass magnitude. Combined with the virtual backpack shown in fig. 9, the total mass of the currently equipped virtual items is 63.10 kilograms, which lies in the third mass magnitude 53, so the terminal plays the third sound effect. The third mass magnitude is the mass magnitude corresponding to the mass of the virtual items being greater than the first mass threshold (40 kilograms) and less than the second mass threshold (75 kilograms).
It should be noted that the timbres of the first, second and third sound effects differ from one another. Timbre refers to the characteristics that distinguish different sounds by their waveforms: different objects vibrate with different characteristics, and different sounding bodies produce different sounds because of their materials and structures. The first, second and third sound effects are sound effects stored in the audio engine, or they are sound-effect materials collected in advance; a person skilled in the art can import the collected materials into the audio engine to construct the matching relationship.
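The three-way division above can be sketched as a simple threshold lookup. This is an illustrative sketch using the thresholds of this embodiment (40 and 75 kilograms); the function and effect names are hypothetical and do not represent the audio engine's API:

```python
FIRST_MASS_THRESHOLD_KG = 40.0   # first mass threshold in this embodiment
SECOND_MASS_THRESHOLD_KG = 75.0  # second mass threshold in this embodiment

# The first matching relationship, written out as a table.
SOUND_EFFECT_BY_MASS_MAGNITUDE = {
    "first": "light_sound_effect",    # mass <= 40 kg
    "second": "heavy_sound_effect",   # mass >= 75 kg
    "third": "medium_sound_effect",   # 40 kg < mass < 75 kg
}

def mass_magnitude(total_mass_kg):
    """Classify the total carried mass into one of the three mass magnitudes."""
    if total_mass_kg <= FIRST_MASS_THRESHOLD_KG:
        return "first"
    if total_mass_kg >= SECOND_MASS_THRESHOLD_KG:
        return "second"
    return "third"

print(SOUND_EFFECT_BY_MASS_MAGNITUDE[mass_magnitude(15.0)])   # light_sound_effect (Fig. 5)
print(SOUND_EFFECT_BY_MASS_MAGNITUDE[mass_magnitude(63.10)])  # medium_sound_effect (Fig. 9)
print(SOUND_EFFECT_BY_MASS_MAGNITUDE[mass_magnitude(88.55)])  # heavy_sound_effect (Fig. 8)
```

The same lookup would be driven by the switch function inside the audio engine rather than by client code; the sketch only makes the threshold logic concrete.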
In summary, in the method provided in this embodiment, a first matching relationship is established between the mass magnitude of the virtual items and the sound effects, so that when the mass of the virtual items carried by the virtual character falls into a given mass magnitude, the client plays the sound effect corresponding to that mass magnitude. The sound effects therefore differ when the virtual character carries virtual items of different masses, which simulates the real world and diversifies the sound effects of the client.
By dividing the mass of the virtual items into magnitudes, the client can accurately play the sound effect corresponding to the mass magnitude when the virtual character carries virtual items of different masses, simulating the real world and diversifying the sound effects of the client.
2. The attribute parameter of the virtual item includes the number of virtual items.
Fig. 11 is a flowchart illustrating a sound effect playing method for a virtual scene according to another exemplary embodiment of the present application. The method can be applied to the terminal 100 shown in fig. 1, on which a client 110 supporting virtual scene operation is installed. The method comprises the following steps:
step 1101, displaying a virtual scene picture, wherein the virtual scene picture comprises a virtual character, and the virtual character carries a virtual article.
The implementation of step 1101 can refer to the implementation of step 201 in the embodiment shown in fig. 2 and the implementation of step 401 in the embodiment shown in fig. 4, and is not described herein again.
Step 1102, obtain the number of virtual items.
Illustratively, the game application obtains the total number of virtual items in the virtual backpack in real time, or the game application obtains the total number of virtual items in the virtual backpack at preset time intervals. The total number of virtual items is the sum of the number of individual virtual items.
Step 1103, in response to that the quantity magnitude corresponding to the quantity of the virtual items meets the first matching relationship, playing a sound effect corresponding to the quantity magnitude.
It will be appreciated that, referring to step 403 in the embodiment shown in fig. 4, the total number of virtual items in the virtual backpack is likewise divided into three quantity magnitudes, and a corresponding sound effect is matched for each quantity magnitude.
1) And responding to the fact that the number of the virtual articles belongs to a first number magnitude, and playing a first sound effect corresponding to the first number magnitude according to the first matching relation, wherein the first number magnitude is the number magnitude corresponding to the situation that the number of the virtual articles is smaller than or equal to a first number threshold.
As shown in fig. 12, the backpack grids of the virtual backpack are all the same size. The throwing-type prop is located in the first backpack grid 61, the first type of energy-supplement prop is located in the second backpack grid 62, and the second type of energy-supplement prop is located in the third backpack grid 63. Energy-supplement props of the same type are all located in the same backpack grid, and their number is displayed below the grid; for example, there are 7 energy-supplement props of the first type and 4 of the second type. Illustratively, the capacity of each backpack grid is unlimited, i.e. each grid can hold an unlimited number of virtual items (virtual props) of the same type; alternatively, the capacity of each backpack grid is limited, i.e. each grid can hold a limited number of virtual items of the same type. For example, if a backpack grid has a capacity of 20, then when the virtual character has 21 energy-supplement props of the first type, those props occupy two backpack grids.
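The capacity example above (21 props at a grid capacity of 20 occupying two grids) is a ceiling division; a minimal sketch, with a hypothetical function name:

```python
import math

def grids_occupied(item_count, grid_capacity):
    """Backpack grids occupied by item_count same-type props at the given grid capacity."""
    return math.ceil(item_count / grid_capacity)

print(grids_occupied(21, 20))  # 2: 21 first-type props at capacity 20 occupy two grids
print(grids_occupied(7, 20))   # 1: the 7 first-type props of Fig. 12 fit in one grid
```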
A first attribute parameter 64 and a second attribute parameter 65 are also displayed in the item list corresponding to the virtual backpack: the first attribute parameter 64 indicates the total number of virtual items currently equipped in the virtual backpack, and the second attribute parameter 65 indicates the total number of virtual items the virtual backpack can accommodate. As shown in fig. 12, 12 virtual props are equipped in the virtual backpack, and the virtual backpack can accommodate a total of 100 virtual items.
Fig. 13 is a schematic view illustrating an operation interface of an audio engine according to another exemplary embodiment of the present application. Likewise, on the left side of the operation interface 60 is a matching relationship constructed based on the switch function (Switch) 35, including a first matching relationship that contains the correspondence between the number 66 of virtual items and the sound effects. Three subset options are included under the number 66 of virtual items: a heavy sound effect (second sound effect), a light sound effect (first sound effect) and a medium sound effect (third sound effect).
On the right side of the operation interface 60 is a graph of the correspondence between the quantity magnitude and the sound effect. As can be seen from the graph, when the total number of virtual items is between 0 and 25 (the first quantity magnitude 67, indicated by the bold horizontal line), the terminal plays the light sound effect (first sound effect) corresponding to the first quantity magnitude. Combined with the virtual backpack in fig. 12, the total number of currently equipped virtual items is 12, which lies in the first quantity magnitude 67, so the terminal plays the first sound effect. The first quantity magnitude is the quantity magnitude corresponding to the number of virtual items being less than or equal to the first quantity threshold (25).
It should be noted that even when the user does not control the virtual character to open the virtual backpack, the game application can still obtain the total number of virtual items carried by the virtual character; the display interface of fig. 12 is used only to make the correspondence between the total number of carried virtual items and the sound effect easier to understand.
2) And responding to the fact that the number of the virtual articles belongs to a second quantity magnitude, and playing a second sound effect corresponding to the second quantity magnitude according to the first matching relation, wherein the second quantity magnitude is the quantity magnitude corresponding to the situation that the number of the virtual articles is larger than or equal to a second quantity threshold value.
When the total number of virtual items is between 75 and 100, the terminal plays the heavy sound effect (second sound effect) corresponding to the second quantity magnitude. The second quantity magnitude is the quantity magnitude corresponding to the number of virtual items being greater than or equal to the second quantity threshold (75).
3) And responding to the fact that the number of the virtual articles belongs to a third number magnitude, playing a third sound effect corresponding to the third number magnitude according to the first matching relation, wherein the third number magnitude is the number magnitude corresponding to the case that the number of the virtual articles is larger than the first number threshold and smaller than the second number threshold.
When the total number of virtual items is between 25 and 75, the terminal plays the medium sound effect (third sound effect) corresponding to the third quantity magnitude. The third quantity magnitude is the quantity magnitude corresponding to the number of virtual items being greater than the first quantity threshold (25) and less than the second quantity threshold (75).
It should be noted that the timbres of the first, second and third sound effects differ from one another. Timbre refers to the characteristics that distinguish different sounds by their waveforms: different objects vibrate with different characteristics, and different sounding bodies produce different sounds because of their materials and structures. The first, second and third sound effects are sound effects stored in the audio engine, or they are sound-effect materials collected in advance; a person skilled in the art can import the collected materials into the audio engine to construct the matching relationship.
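Analogously to the mass case, the quantity magnitudes of this embodiment (thresholds 25 and 75) can be sketched as a threshold lookup; the function name is hypothetical:

```python
FIRST_QUANTITY_THRESHOLD = 25   # first quantity threshold in this embodiment
SECOND_QUANTITY_THRESHOLD = 75  # second quantity threshold in this embodiment

def quantity_magnitude(total_count):
    """Classify the total number of carried virtual items into the three quantity magnitudes."""
    if total_count <= FIRST_QUANTITY_THRESHOLD:
        return "first"   # light sound effect (first sound effect)
    if total_count >= SECOND_QUANTITY_THRESHOLD:
        return "second"  # heavy sound effect (second sound effect)
    return "third"       # medium sound effect (third sound effect)

print(quantity_magnitude(12))  # first: the 12 equipped props of Fig. 12
```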
In summary, in the method provided in this embodiment, a first matching relationship is established between the quantity magnitude of the virtual items and the sound effects, so that when the number of virtual items carried by the virtual character falls into a given quantity magnitude, the client plays the sound effect corresponding to that quantity magnitude. The sound effects therefore differ when the virtual character carries different numbers of virtual items, which simulates the real world and diversifies the sound effects of the client.
By dividing the number of virtual items into quantity magnitudes, the client can accurately play the sound effect corresponding to the quantity magnitude when the virtual character carries different numbers of virtual items, simulating the real world and diversifying the sound effects presented by the client.
3. The attribute parameter of the virtual item includes a volume of the virtual item.
Fig. 14 is a flowchart illustrating a sound effect playing method for a virtual scene according to another exemplary embodiment of the present application. The method can be applied to the terminal 100 shown in fig. 1, on which a client 110 supporting virtual scene operation is installed. The method comprises the following steps:
Step 1401, displaying a virtual scene picture, where the virtual scene picture includes a virtual character, and the virtual character carries a virtual item.
The implementation of step 1401 may refer to the implementation of step 201 in the embodiment shown in fig. 2 and the implementation of step 401 in the embodiment shown in fig. 4, and is not described herein again.
Step 1402, acquiring the volume of the virtual items.
Illustratively, the game application obtains the total volume of the virtual items in the virtual backpack in real time, or the game application obtains the total volume of the virtual items in the virtual backpack at preset time intervals. The total volume of the virtual items is the sum of the volumes of the individual virtual items.
Step 1403, in response to the volume magnitude corresponding to the volume of the virtual items satisfying the first matching relationship, playing the sound effect corresponding to that volume magnitude.
It can be understood that, referring to step 403 in the embodiment shown in fig. 4, the total volume of the virtual items in the virtual backpack is also divided into three volume levels, and the corresponding sound effects are matched according to different volume levels.
1) And responding to the fact that the volume of the virtual article belongs to a first volume magnitude, playing a first sound effect corresponding to the first volume magnitude according to a first matching relation, wherein the first volume magnitude is the corresponding volume magnitude when the volume of the virtual article is smaller than or equal to a first volume threshold value.
When the total volume of the virtual items is between V1 and V2 (cubic decimeters), the terminal plays the light sound effect (first sound effect) corresponding to the first volume magnitude. The first volume magnitude is the volume magnitude corresponding to the volume of the virtual items being less than or equal to the first volume threshold (V2). V1 and V2 are both positive numbers, and V1 < V2.
2) And responding to the volume of the virtual article belonging to the second volume magnitude, and playing a second sound effect corresponding to the second volume magnitude according to the first matching relation, wherein the second volume magnitude is the volume magnitude corresponding to the virtual article when the volume is larger than or equal to the second volume threshold.
When the total volume of the virtual items is between V3 and V4 (cubic decimeters), the terminal plays the heavy sound effect (second sound effect) corresponding to the second volume magnitude. The second volume magnitude is the volume magnitude corresponding to the volume of the virtual items being greater than or equal to the second volume threshold (V3). V3 and V4 are both positive numbers, and V3 < V4.
3) And responding to the fact that the volume of the virtual article belongs to a third volume magnitude, playing a third sound effect corresponding to the third volume magnitude according to the first matching relation, wherein the third volume magnitude is the corresponding volume magnitude when the volume of the virtual article is larger than the first volume threshold and smaller than the second volume threshold.
When the total volume of the virtual items is between V2 and V3 (cubic decimeters), the terminal plays the medium sound effect (third sound effect) corresponding to the third volume magnitude. The third volume magnitude is the volume magnitude corresponding to the volume of the virtual items being greater than the first volume threshold (V2) and less than the second volume threshold (V3). The timbres of the first, second and third sound effects differ from one another. V2 and V3 are both positive numbers, and V2 < V3.
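The volume case follows the same pattern as the mass and quantity cases. The embodiment leaves V1 through V4 unspecified beyond V1 < V2 < V3 < V4, so the concrete values in this sketch are purely hypothetical placeholders:

```python
# Hypothetical thresholds in cubic decimeters; only V1 < V2 < V3 < V4 is given.
V1, V2, V3, V4 = 1.0, 40.0, 75.0, 100.0

def volume_magnitude(total_volume_dm3):
    """Classify the total volume of carried virtual items into the three volume magnitudes."""
    if total_volume_dm3 <= V2:
        return "first"   # light sound effect (first sound effect)
    if total_volume_dm3 >= V3:
        return "second"  # heavy sound effect (second sound effect)
    return "third"       # medium sound effect (third sound effect)
```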
The above embodiments corresponding to the mass of the virtual items carried by the virtual character, to their number, and to their volume may be implemented individually, in any pairwise combination, or all together.
In summary, in the method provided in this embodiment, a first matching relationship is established between the volume magnitude of the virtual items and the sound effects, so that when the volume of the virtual items carried by the virtual character falls into a given volume magnitude, the client plays the sound effect corresponding to that volume magnitude. The sound effects therefore differ when the virtual character carries virtual items of different volumes, which simulates the real world and diversifies the sound effects of the client.
By dividing the volume of the virtual items into magnitudes, the client can accurately play the sound effect corresponding to the volume magnitude when the virtual character carries virtual items of different volumes, simulating the real world and diversifying the sound effects of the client.
The first matching relationship is generated by the audio engine, so that when the client obtains the attribute parameters of the virtual items, the corresponding sound effect can be determined accurately through the first matching relationship, simulating the real world and diversifying the sound effects played by the client.
It should be noted that the virtual character carrying virtual items with different attribute parameters generates different sound effects. The sound effects in the embodiments of the present application are sound effects generated when the virtual character carries items, not background sound effects of the virtual scene.
Taking a game application as an example: when the user controls the virtual character to play, the virtual character moves in the virtual scene. A second matching relationship is established between the sound effects and the attribute parameters of the virtual items carried by the virtual character in each motion state, so that even when the virtual character is equipped with virtual items having the same attribute parameters, the client plays different sound effects in different states.
Fig. 15 is a flowchart illustrating a sound effect playing method for a virtual scene according to another exemplary embodiment of the present application. The method can be applied to the terminal 100 shown in fig. 1, on which a client 110 supporting virtual scene operation is installed. The method comprises the following steps:
Step 1501, displaying a virtual scene picture, where the virtual scene picture includes a virtual character, and the virtual character carries a virtual item.
The implementation of step 1501 can be referred to the implementation of step 201 in the embodiment shown in fig. 2 and the implementation of step 401 in the embodiment shown in fig. 4, and is not described herein again.
Step 1502, in response to the virtual character moving in the virtual scene, obtaining a second matching relationship corresponding to the motion state, where the second matching relationship represents the correspondence between the sound effects and the attribute parameters of the virtual items carried by the virtual character in that motion state, the attribute parameters being attribute parameters of the physical dimension.
Illustratively, the motion state includes at least one of a walking state, a running state, a jumping state, a squatting state, a lying state and a sprinting state.
The second matching relationship is obtained from a second data packet generated by an audio engine; a switch function in the audio engine is used to construct the second matching relationship. Schematically, the audio engine is exemplified by the Wwise middleware, a sound-processing tool that can test multiple sound effects in game applications.
The second matching relationship is constructed through the switch function of the Wwise middleware from the attribute parameters, in the physical dimension, of the virtual items carried by the virtual character and the sound effects corresponding to those attribute parameters. The second matching relationship is packaged in a second data packet, which the developer of the game application embeds into the application at design time. Illustratively, when the game application embedding the second data packet is installed on the user's terminal and runs, it acquires in real time the attribute parameters of the virtual items carried by the virtual character in the motion state, matches the acquired attribute parameters against the second data packet, and the terminal plays the sound effect corresponding to those attribute parameters.
As shown in fig. 16, the mass of the virtual items is displayed in the operation interface 171 of the audio engine, and the correspondence between the mass of the virtual items and the sound effects is constructed; as shown in fig. 18, each mass magnitude of the virtual items can be set via the range setting section 174 in the editing interface 173 for game parameters; as shown in fig. 17, the correspondence between the sound effects and the attribute parameters of the virtual items carried by the virtual character in various motion states is displayed in the operation interface 172 of the audio engine.
The process by which the audio engine constructs the second matching relationship is shown in fig. 19, explained by taking the walking state as the motion state and the mass of the carried virtual items as the attribute parameter. When the user controls the virtual character to play, the client of the game application obtains the motion state 161 of the virtual character and imports the preset motion state into the audio engine. Schematically, the virtual character is in the walking state 162, and the client detects the mass 163 of the virtual items equipped on the virtual character in the walking state. The correspondence between that mass and the sound effect is the second matching relationship 164, constructed by the switch function in the audio engine; that is, the sound effect corresponding to the mass of the virtual items carried in the walking state is matched.
Step 1503, playing the sound effect corresponding to the attribute parameters according to the second matching relationship and the attribute parameters of the virtual items carried by the virtual character in the motion state.
When the virtual character carries virtual items of the same mass but is in different motion states, different sound effects are played. For example, when the virtual character is walking and the mass of the carried virtual items belongs to the first mass magnitude, sound effect a corresponding to that mass is played according to the second matching relationship; when the virtual character is running and the mass of the carried virtual items belongs to the first mass magnitude, sound effect c is played according to the second matching relationship. Sound effect a and sound effect c differ in timbre.
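The sound effect a versus sound effect c example can be sketched as a lookup keyed on both the motion state and the mass magnitude; the table below is a hypothetical stand-in for the switch-function configuration inside the audio engine:

```python
# The second matching relationship: (motion state, mass magnitude) -> sound effect.
SECOND_MATCHING_RELATIONSHIP = {
    ("walking", "first"): "sound_effect_a",
    ("running", "first"): "sound_effect_c",
    # further (state, magnitude) pairs would be configured in the audio engine
}

def sound_effect_for(motion_state, mass_magnitude):
    """Look up the sound effect for a carried-item mass magnitude in a given motion state."""
    return SECOND_MATCHING_RELATIONSHIP.get((motion_state, mass_magnitude))

print(sound_effect_for("walking", "first"))  # sound_effect_a
print(sound_effect_for("running", "first"))  # sound_effect_c
```

The same structure extends directly to the number or volume of the carried items by keying on the corresponding magnitude instead of the mass magnitude.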
It can be understood that the same applies in the above method embodiments when the attribute parameter of the virtual items is the volume or the number.
The embodiment shown in fig. 15, the embodiment shown in fig. 4, the embodiment shown in fig. 11, and the embodiment shown in fig. 14 may be implemented individually, or may be implemented in combination of two or three, or may be implemented in combination of all.
In summary, in the method provided in this embodiment, the motion state of the virtual character and the attribute parameters, in the physical dimension, of the virtual items carried in that motion state are acquired, and a second matching relationship is established between those attribute parameters and the sound effects. The virtual items carried by the virtual character in different motion states therefore generate different sound effects; for example, the terminal plays different sound effects when virtual items with the same attribute parameters are carried in the running state and in the walking state. This simulates the real world and diversifies the sound effects played by the client.
The second matching relationship is generated through the audio engine, so that when the client acquires the attribute parameters of the virtual articles carried by the virtual character in the motion state, the corresponding sound effect can be accurately determined through the second matching relationship, the simulation of the real world is realized, and the sound effect played by the client is diversified.
It should be noted that the virtual character generates different sound effects when carrying virtual articles with different attribute parameters. The sound effects in the embodiments of the present application are those generated when the virtual character carries an article; they are not the background sound effects of the virtual scene.
The above embodiments describe the method in a game application scenario; the following describes the method, by way of example, in a military simulation application scenario.
Simulation technology is a model technology that uses software and hardware to reflect system behaviors or processes by simulating real-world experiments.
A military simulation program is a program constructed specifically for military applications using simulation technology. It performs quantitative analysis of sea, land, air, and other combat elements, weapon and equipment performance, combat actions, and the like, thereby accurately simulating the battlefield environment, presenting the battlefield situation, and supporting the evaluation of combat systems and decision-making.
In one example, soldiers establish a virtual battlefield on the terminal where the military simulation program is located and fight in teams. A soldier controls a virtual character in the virtual battlefield scene to perform at least one of the following actions: standing, squatting, sitting, lying on the back, lying prone, lying on the side, walking, running, climbing, driving, shooting, throwing, attacking, being injured, reconnaissance, and close combat. The battlefield virtual scene includes at least one natural form among flat ground, mountains, plateaus, basins, deserts, rivers, lakes, oceans, and vegetation, as well as site forms such as buildings, vehicles, ruins, and training grounds. The virtual characters include virtual humans, virtual animals, cartoon characters, and the like; each virtual character has its own shape and volume in the three-dimensional virtual scene and occupies part of the space in the three-dimensional virtual scene.
Based on the above, in one example, soldier A controls virtual character a, soldier B controls virtual character b, and soldier A and soldier B do not belong to the same team. When the battle starts, soldier A controls virtual character a to walk in the virtual scene; the military simulation program acquires, in real time, the mass m of the virtual articles carried by virtual character a in the walking state and plays a sound effect P corresponding to the mass m carried in the walking state, according to a data packet built into the military simulation program that contains the matching relationship. Soldier B controls virtual character b to run in the virtual scene; the military simulation program acquires, in real time, the mass n of the virtual articles carried by virtual character b in the running state and plays a sound effect Q corresponding to the mass n carried in the running state, according to the same data packet. The sound effect P and the sound effect Q differ in timbre.
In summary, in this embodiment, the sound effect playing method for a virtual scene is applied to a military simulation program. A soldier can judge, from the sound effects alone, the attribute parameters in the physical dimension of the carried virtual articles without opening the virtual backpack, which improves the soldier's operational efficiency and realistically simulates an actual battlefield, so that soldiers can be trained more effectively.
The following are embodiments of the apparatus of the present application, and for details that are not described in detail in the embodiments of the apparatus, reference may be made to corresponding descriptions in the above method embodiments, and details are not described herein again.
Fig. 20 is a schematic structural diagram illustrating an audio effect playing apparatus for a virtual scene according to an exemplary embodiment of the present application. The apparatus can be implemented as all or a part of a terminal by software, hardware or a combination of both, and includes:
a display module 2010, configured to display a virtual scene picture, where the virtual scene picture includes a virtual character, and the virtual character carries a virtual article;
the obtaining module 2020 is configured to obtain attribute parameters of the virtual article in a physical dimension, where the attribute parameters of the physical dimension include: at least one of mass, quantity, and volume;
the playing module 2030, configured to play the sound effect corresponding to the attribute parameter in response to that the attribute parameter satisfies a first matching relationship, where the first matching relationship represents a corresponding relationship between the attribute parameter of the virtual article and the sound effect.
In an alternative embodiment, the attribute parameter of the virtual article comprises the mass of the virtual article; the playing module 2030 is configured to play a sound effect corresponding to the mass magnitude in response to the mass magnitude corresponding to the mass of the virtual article satisfying the first matching relationship.
In an optional embodiment, the playing module 2030 is configured to: in response to the mass of the virtual article belonging to a first mass magnitude, play a first sound effect corresponding to the first mass magnitude according to the first matching relationship, where the first mass magnitude is the magnitude corresponding to a mass less than or equal to a first mass threshold; in response to the mass of the virtual article belonging to a second mass magnitude, play a second sound effect corresponding to the second mass magnitude according to the first matching relationship, where the second mass magnitude is the magnitude corresponding to a mass greater than or equal to a second mass threshold; and in response to the mass of the virtual article belonging to a third mass magnitude, play a third sound effect corresponding to the third mass magnitude according to the first matching relationship, where the third mass magnitude is the magnitude corresponding to a mass greater than the first mass threshold and less than the second mass threshold. The first, second, and third sound effects differ in timbre.
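The three-tier threshold logic can be sketched as follows. The threshold values and all names are hypothetical — the patent does not fix concrete numbers — and "quality" in the machine translation denotes the article's mass.

```python
# Illustrative sketch of the mass-magnitude thresholds. Values are invented:
# at or below the first threshold -> first magnitude; at or above the second
# threshold -> second magnitude; strictly between them -> third magnitude.
FIRST_MASS_THRESHOLD = 10.0   # kg, assumed for illustration
SECOND_MASS_THRESHOLD = 50.0  # kg, assumed for illustration

EFFECT_BY_MAGNITUDE = {
    1: "first_sound_effect",
    2: "second_sound_effect",
    3: "third_sound_effect",
}

def mass_magnitude(mass: float) -> int:
    """Map a carried mass to its magnitude tier."""
    if mass <= FIRST_MASS_THRESHOLD:
        return 1
    if mass >= SECOND_MASS_THRESHOLD:
        return 2
    return 3  # strictly between the two thresholds

def effect_for_mass(mass: float) -> str:
    """First matching relationship: mass magnitude -> sound effect."""
    return EFFECT_BY_MAGNITUDE[mass_magnitude(mass)]

assert effect_for_mass(5.0) == "first_sound_effect"
assert effect_for_mass(25.0) == "third_sound_effect"
assert effect_for_mass(80.0) == "second_sound_effect"
```

The same tiered structure applies when the attribute parameter is the quantity or the volume of the virtual articles, with the thresholds replaced accordingly.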
In an alternative embodiment, the attribute parameter of the virtual item comprises a number of the virtual item; the playing module 2030 is configured to play the sound effect corresponding to the quantity order in response to that the quantity order corresponding to the quantity of the virtual items satisfies the first matching relationship.
In an optional embodiment, the playing module 2030 is configured to: in response to the number of virtual articles belonging to a first quantity magnitude, play a first sound effect corresponding to the first quantity magnitude according to the first matching relationship, where the first quantity magnitude is the magnitude corresponding to a quantity less than or equal to a first quantity threshold; in response to the number of virtual articles belonging to a second quantity magnitude, play a second sound effect corresponding to the second quantity magnitude according to the first matching relationship, where the second quantity magnitude is the magnitude corresponding to a quantity greater than or equal to a second quantity threshold; and in response to the number of virtual articles belonging to a third quantity magnitude, play a third sound effect corresponding to the third quantity magnitude according to the first matching relationship, where the third quantity magnitude is the magnitude corresponding to a quantity greater than the first quantity threshold and less than the second quantity threshold. The first, second, and third sound effects differ in timbre.
In an alternative embodiment, the attribute parameter of the virtual item comprises a volume of the virtual item; the playing module 2030 is configured to play a sound effect corresponding to the volume magnitude in response to that the volume magnitude corresponding to the volume of the virtual item satisfies the first matching relationship.
In an optional embodiment, the playing module 2030 is configured to: in response to the volume of the virtual article belonging to a first volume magnitude, play a first sound effect corresponding to the first volume magnitude according to the first matching relationship, where the first volume magnitude is the magnitude corresponding to a volume less than or equal to a first volume threshold; in response to the volume of the virtual article belonging to a second volume magnitude, play a second sound effect corresponding to the second volume magnitude according to the first matching relationship, where the second volume magnitude is the magnitude corresponding to a volume greater than or equal to a second volume threshold; and in response to the volume of the virtual article belonging to a third volume magnitude, play a third sound effect corresponding to the third volume magnitude according to the first matching relationship, where the third volume magnitude is the magnitude corresponding to a volume greater than the first volume threshold and less than the second volume threshold. The first, second, and third sound effects differ in timbre.
In an optional embodiment, the obtaining module 2020 is configured to, in response to a virtual character moving in a virtual scene, obtain a second matching relationship corresponding to a moving state, where the second matching relationship represents a corresponding relationship between attribute parameters and sound effects of a virtual article carried by the virtual character in the moving state; the playing module 2030 is configured to play a sound effect corresponding to the attribute parameter according to the second matching relationship and the attribute parameter of the virtual item carried by the virtual character in the motion state.
In an alternative embodiment, the exercise state includes at least one of a walking state, a running state, a jumping state, a squatting state, a lying state, and a sprint state.
In an alternative embodiment, the first matching relationship is obtained by a first data packet generated by an audio engine, and a transformation function in the audio engine is used to construct the first matching relationship.
In an alternative embodiment, the second matching relationship is obtained by a second data packet generated by an audio engine, and a transformation function in the audio engine is used to construct the second matching relationship.
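One way such a matching relationship might be produced is to evaluate the audio engine's transformation function offline and export the result as a data packet that the client loads at runtime. Everything below — the JSON layout, the function names, and the effect identifiers — is an assumption for illustration, not the data-packet format the patent describes; real audio middleware (for example, Wwise game-parameter curves) uses its own formats.

```python
# Speculative sketch: a transformation function builds the matching
# relationship, which is serialized into a "data packet" for the client.
import json

def transform(magnitude: int) -> str:
    # Hypothetical transformation function: magnitude -> sound effect ID.
    return f"sound_effect_{magnitude}"

def build_data_packet(magnitudes) -> str:
    """Serialize the magnitude -> effect mapping as a JSON data packet."""
    matching = {str(m): transform(m) for m in magnitudes}
    return json.dumps({"first_matching_relationship": matching})

def lookup(packet: str, magnitude: int) -> str:
    """Client side: load the packet and resolve a magnitude to an effect."""
    return json.loads(packet)["first_matching_relationship"][str(magnitude)]

packet = build_data_packet([1, 2, 3])
assert lookup(packet, 2) == "sound_effect_2"
```

Generating the table once in the engine and shipping it as data keeps the client's runtime work to a simple lookup, which is consistent with the patent's claim that the sound effect "can be accurately determined" from the acquired attribute parameters.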
In summary, in the apparatus provided in this embodiment, a first matching relationship is established between the attribute parameters of the virtual article in the physical dimension and the sound effects, so that the client can play the corresponding sound effect according to the attribute parameters of the virtual article acquired in real time. The sound effects triggered when the virtual character carries virtual articles with different attribute parameters are different, which simulates the real world and diversifies the client's sound effects.
By establishing the first matching relationship between the mass magnitude of the virtual article and the sound effect, when the mass of the virtual article carried by the virtual character falls within a certain mass magnitude, the client plays the sound effect corresponding to that magnitude, so that the sound effects differ when the virtual character carries virtual articles of different masses. This simulates the real world and diversifies the client's sound effects.
By dividing the mass of virtual articles into magnitudes, when the virtual character carries virtual articles of different masses, the client can accurately play the sound effect corresponding to the mass magnitude, which simulates the real world and diversifies the client's sound effects.
By establishing the first matching relation between the quantity magnitude of the virtual articles and the sound effects, when the quantity of the virtual articles carried by the virtual character meets a certain quantity magnitude, the sound effects corresponding to the quantity magnitude of the virtual articles are played by the client, so that the sound effects corresponding to the virtual character carrying different quantities of the virtual articles are different, the simulation of the real world is realized, and the sound effects of the client are diversified.
By dividing the quantity magnitude of the virtual articles, when the virtual characters carry different quantities of virtual articles, the client can accurately play sound effects corresponding to the quantity magnitude, so that the simulation of the real world is realized, and the sound effects presented by the client are diversified.
By establishing the first matching relation between the volume magnitude of the virtual article and the sound effect, when the volume of the virtual article carried by the virtual character meets a certain volume magnitude, the sound effect corresponding to the volume magnitude of the virtual article is played by the client, so that the sound effects corresponding to the virtual character carrying the virtual articles with different volumes are different, the simulation of the real world is realized, and the sound effect of the client is diversified.
By dividing the volume magnitude of the virtual articles, when the virtual characters carry virtual articles with different volumes, the client can accurately play the sound effect corresponding to the volume magnitude, so that the simulation of the real world is realized, and the sound effect of the client is diversified.
The first matching relation is generated through the audio engine, so that when the client side obtains the attribute parameters of the virtual article, the corresponding sound effect can be accurately determined through the first matching relation, the simulation of the real world is realized, and the sound effect played by the client side is diversified.
By acquiring the motion state of the virtual character and the attribute parameters of the virtual articles carried in that motion state, and by establishing a second matching relationship between those attribute parameters and sound effects for each motion state, virtual articles carried by the virtual character in different motion states can generate different sound effects; for example, the terminal plays different sound effects when virtual articles with the same attribute parameters are carried in a running state and in a walking state. This simulates the real world and diversifies the sound effects played by the client.
The second matching relationship is generated through the audio engine, so that when the client acquires the attribute parameters of the virtual articles carried by the virtual character in the motion state, the corresponding sound effect can be accurately determined through the second matching relationship, the simulation of the real world is realized, and the sound effect played by the client is diversified.
It should be noted that: in the sound effect playing apparatus for a virtual scene provided in the above embodiment, the division into the above functional modules is merely illustrative; in practical applications, the functions can be allocated to different functional modules as needed, that is, the internal structure of the apparatus can be divided into different functional modules to complete all or part of the functions described above. In addition, the sound effect playing apparatus for a virtual scene provided in the above embodiment and the embodiments of the sound effect playing method for a virtual scene belong to the same concept; the specific implementation process is detailed in the method embodiments and is not repeated here.
Referring to fig. 21, a block diagram of a computer device 2100 according to an exemplary embodiment of the present application is shown. The computer device 2100 may be a portable mobile terminal, such as a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), or an MP4 player (Moving Picture Experts Group Audio Layer IV). The computer device 2100 may also be referred to by other names such as user equipment, portable terminal, and the like.
Generally, computer device 2100 includes: a processor 2101 and a memory 2102.
The processor 2101 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 2101 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 2101 may also include a main processor and a coprocessor, the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 2101 may be integrated with a GPU (Graphics Processing Unit) which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 2101 may also include an AI (Artificial Intelligence) processor to process computational operations related to machine learning.
The memory 2102 may include one or more computer-readable storage media, which may be tangible and non-transitory. The memory 2102 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 2102 is used to store at least one instruction for execution by the processor 2101 to implement the sound effect playing method of the virtual scene provided in the embodiments of the present application.
In some embodiments, computer device 2100 may also optionally include: a peripheral interface 2103 and at least one peripheral. Specifically, the peripheral device includes: at least one of a radio frequency circuit 2104, a touch display screen 2105, a camera assembly 2106, an audio circuit 2107, a positioning assembly 2108, and a power supply 2109.
The peripheral interface 2103 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 2101 and the memory 2102. In some embodiments, the processor 2101, memory 2102 and peripheral interface 2103 are integrated on the same chip or circuit board; in some other embodiments, any one or both of the processor 2101, the memory 2102 and the peripheral interface 2103 may be implemented on separate chips or circuit boards, which is not limited by this embodiment.
The radio frequency circuit 2104 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 2104 communicates with a communication network and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 2104 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The radio frequency circuit 2104 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: the World Wide Web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or Wi-Fi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 2104 may further include NFC (Near Field Communication) related circuits, which is not limited in this application.
The touch display screen 2105 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. Touch display screen 2105 also has the ability to capture touch signals on or over the surface of touch display screen 2105. The touch signal may be input as a control signal to the processor 2101 for processing. The touch display 2105 is used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards. In some embodiments, the touch display screen 2105 may be one, providing the front panel of the computer device 2100; in other embodiments, the touch display screen 2105 can be at least two, each disposed on a different surface of the computer device 2100 or in a folded design; in still other embodiments, the touch display 2105 may be a flexible display, disposed on a curved surface or on a folded surface of the computer device 2100. Even the touch display screen 2105 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The touch Display screen 2105 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
The camera assembly 2106 is used to capture images or video. Optionally, the camera assembly 2106 includes a front camera and a rear camera. Generally, the front camera is used for video calls or self-portraits, and the rear camera is used for shooting pictures or videos. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth-of-field camera, and a wide-angle camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting and VR (Virtual Reality) shooting functions. In some embodiments, the camera assembly 2106 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash combines a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 2107 is used to provide an audio interface between a user and the computer device 2100. The audio circuitry 2107 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals into the processor 2101 for processing, or inputting the electric signals into the radio frequency circuit 2104 to realize voice communication. The microphones may be multiple and placed at different locations on the computer device 2100 for stereo sound acquisition or noise reduction purposes. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert the electrical signals from the processor 2101 or the radio frequency circuit 2104 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 2107 may also include a headphone jack.
The positioning component 2108 is used to locate the current geographic location of the computer device 2100 for navigation or LBS (Location Based Service). The positioning component 2108 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the GLONASS system of Russia.
The power supply 2109 is used to power the various components in the computer device 2100. The power source 2109 may be alternating current, direct current, disposable batteries, or rechargeable batteries. When the power source 2109 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, computer device 2100 also includes one or more sensors 2110. The one or more sensors 2110 include, but are not limited to: acceleration sensor 2111, gyro sensor 2112, pressure sensor 2113, fingerprint sensor 2114, optical sensor 2115, and proximity sensor 2116.
The acceleration sensor 2111 can detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the computer apparatus 2100. For example, the acceleration sensor 2111 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 2101 may control the touch display screen 2105 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 2111. The acceleration sensor 2111 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 2112 may detect a body direction and a rotation angle of the computer device 2100, and the gyro sensor 2112 may cooperate with the acceleration sensor 2111 to acquire a 3D motion of the user on the computer device 2100. The processor 2101 may implement the following functions according to the data collected by the gyro sensor 2112: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 2113 may be disposed on the side bezel of the computer device 2100 and/or beneath the touch display screen 2105. When the pressure sensor 2113 is disposed on the side bezel of the computer device 2100, it can detect the user's grip signal on the computer device 2100, and left/right-hand recognition or shortcut operations can be performed based on the grip signal. When the pressure sensor 2113 is disposed beneath the touch display screen 2105, operable controls on the UI can be manipulated according to the pressure the user applies to the touch display screen 2105. The operable controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 2114 is used for collecting a fingerprint of the user to identify the identity of the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 2101 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, payment, and changing settings, etc. The fingerprint sensor 2114 may be provided on the front, back, or side of the computer device 2100. When a physical key or vendor Logo is provided on the computer device 2100, the fingerprint sensor 2114 may be integrated with the physical key or vendor Logo.
The optical sensor 2115 is used to collect the ambient light intensity. In one embodiment, processor 2101 may control the display brightness of touch display 2105 based on the ambient light intensity collected by optical sensor 2115. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 2105 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 2105 is turned down. In another embodiment, processor 2101 may also dynamically adjust the shooting parameters of camera head assembly 2106 based on the intensity of ambient light collected by optical sensor 2115.
A proximity sensor 2116, also called a distance sensor, is typically disposed on the front of the computer device 2100. The proximity sensor 2116 is used to capture the distance between the user and the front of the computer device 2100. In one embodiment, when the proximity sensor 2116 detects that the distance between the user and the front of the computer device 2100 is gradually decreasing, the processor 2101 controls the touch display screen 2105 to switch from the bright-screen state to the off-screen state; when the proximity sensor 2116 detects that the distance is gradually increasing, the processor 2101 controls the touch display screen 2105 to switch from the off-screen state back to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 21 is not intended to be limiting with respect to the computer device 2100 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
The embodiment of the present application further provides a computer device, where the computer device includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the sound effect playing method for a virtual scene provided in the above method embodiments.
The embodiment of the present application further provides a computer-readable storage medium, where at least one instruction, at least one program, a code set, or an instruction set is stored in the storage medium, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the sound effect playing method for a virtual scene provided by the above method embodiments.
The embodiment of the application also provides a computer readable storage medium, wherein a computer program is stored in the storage medium, and the computer program is loaded and executed by a processor to realize the sound effect playing method of the virtual scene provided by the method embodiments.
Embodiments of the present application also provide a computer program product or a computer program, which includes computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and executes the computer instructions, so that the computer device executes the sound effect playing method of the virtual scene as described above.
It should be understood that reference to "a plurality" herein means two or more. "And/or" describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects before and after it.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing relevant hardware; the program may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disk, or the like.
The above description presents only exemplary embodiments of the present application and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within its protection scope.

Claims (14)

1. A sound effect playing method in a virtual scene, characterized by comprising:
displaying a virtual scene picture, the virtual scene picture comprising a virtual character, the virtual character carrying a virtual item;
obtaining attribute parameters of the virtual item in a physical dimension, the attribute parameters of the physical dimension comprising at least one of mass, quantity, and volume; and
in response to the attribute parameters satisfying a first matching relationship, playing a sound effect corresponding to the attribute parameters, the first matching relationship representing a correspondence between the attribute parameters of the virtual item and sound effects.
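For illustration only (not part of the claimed method), the lookup of claim 1 can be sketched as a table from a physical-dimension attribute parameter and its magnitude to a sound effect; every name and table entry below is a hypothetical assumption, not taken from the patent:

```python
# Hypothetical sketch of the "first matching relationship" of claim 1:
# a mapping from (attribute parameter, magnitude) to a sound effect id.
# All identifiers and entries are illustrative assumptions.
FIRST_MATCHING_RELATIONSHIP = {
    ("mass", "light"): "sfx_light_rustle",
    ("mass", "heavy"): "sfx_heavy_thud",
    ("quantity", "few"): "sfx_single_clink",
    ("volume", "large"): "sfx_bulky_drag",
}

def sound_effect_for(attribute, magnitude):
    """Return the sound effect id when the attribute parameter satisfies
    the first matching relationship; return None when no entry matches
    (i.e. no sound effect is played)."""
    return FIRST_MATCHING_RELATIONSHIP.get((attribute, magnitude))
```

A game client would call such a lookup each time a carried item's attribute parameters are obtained, and only play a sound effect when a non-empty result is returned.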
2. The method of claim 1, wherein the attribute parameters of the virtual item comprise a mass of the virtual item;
and wherein playing the sound effect corresponding to the attribute parameters in response to the attribute parameters satisfying the first matching relationship comprises:
in response to a mass magnitude corresponding to the mass of the virtual item satisfying the first matching relationship, playing a sound effect corresponding to the mass magnitude.
3. The method of claim 2, wherein playing the sound effect corresponding to the mass magnitude in response to the mass magnitude corresponding to the mass of the virtual item satisfying the first matching relationship comprises:
in response to the mass of the virtual item belonging to a first mass magnitude, playing a first sound effect corresponding to the first mass magnitude according to the first matching relationship, the first mass magnitude being the mass magnitude corresponding to a mass of the virtual item less than or equal to a first mass threshold;
in response to the mass of the virtual item belonging to a second mass magnitude, playing a second sound effect corresponding to the second mass magnitude according to the first matching relationship, the second mass magnitude being the mass magnitude corresponding to a mass of the virtual item greater than or equal to a second mass threshold; and
in response to the mass of the virtual item belonging to a third mass magnitude, playing a third sound effect corresponding to the third mass magnitude according to the first matching relationship, the third mass magnitude being the mass magnitude corresponding to a mass of the virtual item greater than the first mass threshold and less than the second mass threshold;
wherein the first sound effect, the second sound effect, and the third sound effect differ in timbre.
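The three-tier bucketing of claim 3 (and the analogous quantity and volume tiers of claims 5 and 7) can be sketched as follows; the threshold values and effect names are assumed for illustration only:

```python
# Illustrative three-tier magnitude classification: values at or below a
# first threshold fall in the first magnitude, values at or above a second
# threshold in the second, and values strictly between the two thresholds
# in the third. Threshold values are hypothetical (e.g. mass in kg).
FIRST_THRESHOLD = 10.0
SECOND_THRESHOLD = 50.0

def classify_magnitude(value):
    """Bucket an attribute value (mass, quantity, or volume) into one of
    the three magnitudes described in claims 3, 5, and 7."""
    if value <= FIRST_THRESHOLD:
        return "first"
    if value >= SECOND_THRESHOLD:
        return "second"
    return "third"  # FIRST_THRESHOLD < value < SECOND_THRESHOLD

# One sound effect per magnitude; the claims require the three effects
# to differ in timbre, so each magnitude maps to a distinct effect.
MAGNITUDE_TO_EFFECT = {
    "first": "sfx_soft",
    "second": "sfx_heavy",
    "third": "sfx_medium",
}
```

Note the boundary handling: both thresholds are inclusive for their own tier, so the third tier is an open interval, matching the "greater than ... and less than ..." wording of the claims.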
4. The method of claim 1, wherein the attribute parameters of the virtual item comprise a quantity of the virtual items;
and wherein playing the sound effect corresponding to the attribute parameters in response to the attribute parameters satisfying the first matching relationship comprises:
in response to a quantity magnitude corresponding to the quantity of the virtual items satisfying the first matching relationship, playing a sound effect corresponding to the quantity magnitude.
5. The method of claim 4, wherein playing the sound effect corresponding to the quantity magnitude in response to the quantity magnitude corresponding to the quantity of the virtual items satisfying the first matching relationship comprises:
in response to the quantity of the virtual items belonging to a first quantity magnitude, playing a first sound effect corresponding to the first quantity magnitude according to the first matching relationship, the first quantity magnitude being the quantity magnitude corresponding to a quantity of the virtual items less than or equal to a first quantity threshold;
in response to the quantity of the virtual items belonging to a second quantity magnitude, playing a second sound effect corresponding to the second quantity magnitude according to the first matching relationship, the second quantity magnitude being the quantity magnitude corresponding to a quantity of the virtual items greater than or equal to a second quantity threshold; and
in response to the quantity of the virtual items belonging to a third quantity magnitude, playing a third sound effect corresponding to the third quantity magnitude according to the first matching relationship, the third quantity magnitude being the quantity magnitude corresponding to a quantity of the virtual items greater than the first quantity threshold and less than the second quantity threshold;
wherein the first sound effect, the second sound effect, and the third sound effect differ in timbre.
6. The method of claim 1, wherein the attribute parameters of the virtual item comprise a volume of the virtual item;
and wherein playing the sound effect corresponding to the attribute parameters in response to the attribute parameters satisfying the first matching relationship comprises:
in response to a volume magnitude corresponding to the volume of the virtual item satisfying the first matching relationship, playing a sound effect corresponding to the volume magnitude.
7. The method of claim 6, wherein playing the sound effect corresponding to the volume magnitude in response to the volume magnitude corresponding to the volume of the virtual item satisfying the first matching relationship comprises:
in response to the volume of the virtual item belonging to a first volume magnitude, playing a first sound effect corresponding to the first volume magnitude according to the first matching relationship, the first volume magnitude being the volume magnitude corresponding to a volume of the virtual item less than or equal to a first volume threshold;
in response to the volume of the virtual item belonging to a second volume magnitude, playing a second sound effect corresponding to the second volume magnitude according to the first matching relationship, the second volume magnitude being the volume magnitude corresponding to a volume of the virtual item greater than or equal to a second volume threshold; and
in response to the volume of the virtual item belonging to a third volume magnitude, playing a third sound effect corresponding to the third volume magnitude according to the first matching relationship, the third volume magnitude being the volume magnitude corresponding to a volume of the virtual item greater than the first volume threshold and less than the second volume threshold;
wherein the first sound effect, the second sound effect, and the third sound effect differ in timbre.
8. The method of any one of claims 1 to 7, further comprising:
in response to the virtual character being in a motion state in the virtual scene, obtaining a second matching relationship corresponding to the motion state, the second matching relationship representing a correspondence between sound effects and the attribute parameters, in the physical dimension, of the virtual item carried by the virtual character in the motion state; and
playing a sound effect corresponding to the attribute parameters according to the second matching relationship and the attribute parameters of the virtual item carried by the virtual character in the motion state.
9. The method of claim 8, wherein the motion state comprises at least one of a walking state, a running state, a jumping state, a squatting state, a prone state, and a sprinting state.
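The motion-state-dependent selection of claims 8 and 9 can be sketched as a second lookup keyed by motion state and attribute magnitude; all keys and effect names below are hypothetical illustrations, not drawn from the patent:

```python
# Hypothetical sketch of the "second matching relationship" of claim 8:
# a mapping from (motion state, attribute magnitude) to a sound effect id,
# so the same carried item can sound different while walking, running,
# or crawling. All entries are illustrative assumptions.
SECOND_MATCHING_RELATIONSHIP = {
    ("walking", "heavy"): "sfx_walk_heavy_clank",
    ("running", "heavy"): "sfx_run_heavy_clank",
    ("running", "light"): "sfx_run_light_rustle",
    ("prone", "heavy"): "sfx_crawl_heavy_drag",
}

def moving_sound_effect(motion_state, magnitude):
    """Return the sound effect for a carried item's magnitude while the
    character is in the given motion state, or None if no entry matches."""
    return SECOND_MATCHING_RELATIONSHIP.get((motion_state, magnitude))
```

Under this reading, the engine first classifies the item's attribute parameter into a magnitude and then selects the effect with the current motion state as an additional key.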
10. The method of any one of claims 1 to 7, wherein the first matching relationship is obtained from a first data packet, the first data packet is generated by an audio engine, and a conversion function in the audio engine is used to construct the first matching relationship.
11. The method of claim 8, wherein the second matching relationship is obtained from a second data packet, the second data packet is generated by an audio engine, and a conversion function in the audio engine is used to construct the second matching relationship.
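As one possible reading of the "conversion function" of claims 10 and 11 (audio middleware such as Wwise exposes comparable parameter-to-behavior curves), a conversion function can turn a continuous attribute value into the discrete magnitude labels of a matching relationship; the helper below is an illustrative assumption, not the patent's actual implementation:

```python
# Hypothetical "conversion function" builder: given sorted thresholds and
# one label per tier, return a classifier that maps a continuous attribute
# value to a discrete magnitude label. The resulting table could then be
# serialized into the data packet described in claims 10-11.
def build_matching_relationship(thresholds, labels):
    """thresholds: ascending cut points, e.g. [10, 50];
    labels: one label per tier, e.g. ["low", "mid", "high"]
    (one more label than there are thresholds)."""
    def convert(value):
        for threshold, label in zip(thresholds, labels):
            if value <= threshold:
                return label
        return labels[-1]  # above every threshold: the top tier
    return convert

# Example: mass magnitudes with two hypothetical cut points.
convert = build_matching_relationship([10, 50], ["low", "mid", "high"])
```

This mirrors how an audio designer might author a single curve in the engine and export it once, rather than hard-coding thresholds in the game client.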
12. A sound effect playing apparatus for a virtual scene, the apparatus comprising:
a display module, configured to display a virtual scene picture, the virtual scene picture comprising a virtual character, the virtual character carrying a virtual item;
an obtaining module, configured to obtain attribute parameters of the virtual item in a physical dimension, the attribute parameters of the physical dimension comprising at least one of mass, quantity, and volume; and
a playing module, configured to play, in response to the attribute parameters satisfying a first matching relationship, a sound effect corresponding to the attribute parameters, the first matching relationship representing a correspondence between the attribute parameters of the virtual item and sound effects.
13. A computer device, comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the sound effect playing method in a virtual scene according to any one of claims 1 to 11.
14. A computer-readable storage medium, storing a computer program, the computer program being loaded and executed by a processor to implement the sound effect playing method in a virtual scene according to any one of claims 1 to 11.
CN202011165714.XA 2020-10-27 2020-10-27 Sound effect playing method, device, equipment and storage medium in virtual scene Active CN112221138B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011165714.XA CN112221138B (en) 2020-10-27 2020-10-27 Sound effect playing method, device, equipment and storage medium in virtual scene


Publications (2)

Publication Number Publication Date
CN112221138A true CN112221138A (en) 2021-01-15
CN112221138B CN112221138B (en) 2022-09-27

Family

ID=74109089

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011165714.XA Active CN112221138B (en) 2020-10-27 2020-10-27 Sound effect playing method, device, equipment and storage medium in virtual scene

Country Status (1)

Country Link
CN (1) CN112221138B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2082397A2 (en) * 2006-10-16 2009-07-29 Fraunhofer-Gesellschaft zur Förderung der Angewandten Forschung e.V. Apparatus and method for multi -channel parameter transformation
US20100197401A1 (en) * 2009-02-04 2010-08-05 Yaniv Altshuler Reliable, efficient and low cost method for games audio rendering
CN102663245A (en) * 2012-03-30 2012-09-12 福建天趣网络科技有限公司 3D (three-dimensional) game world editor
CN108355356A (en) * 2018-03-14 2018-08-03 网易(杭州)网络有限公司 Scene of game sound intermediate frequency control method for playing back and device
CN109144610A (en) * 2018-08-31 2019-01-04 腾讯科技(深圳)有限公司 Audio frequency playing method, device, electronic device and computer readable storage medium


Non-Patent Citations (3)

Title
Qiyi Music (奇亿音乐): "CSDN", 15 August 2020 *
Xiaoshudian Jieshuo (小数点解说): "Baidu", 10 March 2018 *
WANG, Pengwei (王鹏威): "Research on Animation Storyboard Script Design" (《动画分镜头脚本设计研究》), 31 January 2019, Jilin Fine Arts Publishing House *

Also Published As

Publication number Publication date
CN112221138B (en) 2022-09-27

Similar Documents

Publication Publication Date Title
CN110413171B (en) Method, device, equipment and medium for controlling virtual object to perform shortcut operation
CN110694261B (en) Method, terminal and storage medium for controlling virtual object to attack
CN108888956B (en) Display method, equipment and storage medium of virtual backpack display interface
CN110507993B (en) Method, apparatus, device and medium for controlling virtual object
CN110201403B (en) Method, device and medium for controlling virtual object to discard virtual article
CN110755841B (en) Method, device and equipment for switching props in virtual environment and readable storage medium
CN110270098B (en) Method, device and medium for controlling virtual object to mark virtual object
CN110585710B (en) Interactive property control method, device, terminal and storage medium
CN110721469B (en) Method, terminal and medium for shielding virtual object in virtual environment
CN110917623B (en) Interactive information display method, device, terminal and storage medium
CN111330274B (en) Virtual object control method, device, equipment and storage medium
CN110585695B (en) Method, apparatus, device and medium for using near-war property in virtual environment
CN112138383B (en) Virtual item display method, device, equipment and storage medium
CN111659119B (en) Virtual object control method, device, equipment and storage medium
CN110755844B (en) Skill activation method and device, electronic equipment and storage medium
CN112295215B (en) Virtual prop display method, device, equipment and storage medium
CN112316421A (en) Equipment method, device, terminal and storage medium of virtual prop
CN111249726B (en) Operation method, device, equipment and readable medium of virtual prop in virtual environment
CN113144597A (en) Virtual vehicle display method, device, equipment and storage medium
CN110639205B (en) Operation response method, device, storage medium and terminal
CN112330823A (en) Virtual item display method, device, equipment and readable storage medium
CN111921190A (en) Method, device, terminal and storage medium for equipping props of virtual objects
CN111651616B (en) Multimedia resource generation method, device, equipment and medium
CN112169321B (en) Mode determination method, device, equipment and readable storage medium
CN113730916B (en) Resource loading method, device, equipment and medium based on virtual environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant