CN109857363B - Sound effect playing method and related device

Info

Publication number: CN109857363B
Authority: CN (China)
Prior art keywords: sound effect, target, type, event, identification
Legal status: Active
Application number: CN201910044352.XA
Other languages: Chinese (zh)
Other versions: CN109857363A
Inventor: 周文波
Assignee (original and current): Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd; published as CN109857363A (application publication) and, after grant, as CN109857363B

Abstract

The invention discloses a sound effect playing method, which comprises the following steps: when a target event type of an audio event is detected, determining a target sound effect identification corresponding to the target event type from a sound effect identification set according to the target event type, wherein the sound effect identification set comprises at least one sound effect identification; acquiring a target type identification of a sound effect receiver, wherein the target type identification belongs to one type identification in a type identification set; acquiring a path of a target sound effect file according to the target sound effect identification and the target type identification, wherein the path is used for indicating an index of the sound effect file; and playing the sound effect corresponding to the target sound effect file according to the path of the target sound effect file. The invention also discloses a client. Even as the complexity of the online game increases, the invention can efficiently locate the audio file to be played, thereby greatly reducing the logic complexity of searching for the audio file on the client.

Description

Sound effect playing method and related device
Technical Field
The present invention relates to the field of internet technologies, and in particular, to a method and a related apparatus for playing sound effects.
Background
In competitive online games, audio often plays an important role. For example, when a game event is triggered, a character plays voice lines that advance the progress of a match or render the atmosphere, improving the player's sense of achievement or guiding the player toward the objective of the next stage. In many cases, different voices are played for different camps and teammate relationships.
Currently, in network games, multiple different audio clips need to be configured for the same game event. For example, character A kills character B; character A, as the killing party, hears the "kill successful" sound effect played by its client, while character B, as the killed party, hears the "unfortunately eliminated" sound effect played by its client.
However, as the complexity of online games increases, personalized audio needs to be configured for the same game event for different player teams, different game environments and different hero characters. The client therefore has to find the corresponding sound effect to play from a large number of audio types, which makes the query logic very complicated.
Disclosure of Invention
The embodiment of the invention provides a sound effect playing method and a related device, which can efficiently locate the audio file to be played even as the complexity of an online game increases, thereby greatly reducing the logic complexity of searching for the audio file on the client.
In view of the above, the first aspect of the present invention provides a method for playing sound effects, including:
when a target event type of an audio event is detected, determining a target sound effect identification corresponding to the target event type from a sound effect identification set according to the target event type, wherein the sound effect identification set comprises at least one sound effect identification, and the sound effect identification and the event type have a one-to-one correspondence relationship;
acquiring a target type identifier of a sound effect receiver, wherein the target type identifier belongs to one type identifier in a type identifier set, and the type identifier is used for representing role types corresponding to different receivers;
acquiring a path of a target sound effect file according to the target sound effect identification and the target type identification, wherein the path is used for indicating an index of the sound effect file, and the path and the type identification have a corresponding relation;
and playing the sound effect corresponding to the target sound effect file according to the path.
A second aspect of the present invention provides a client, including:
the device comprises a determining module, a judging module and a judging module, wherein the determining module is used for determining a target sound effect identification corresponding to a target event type from a sound effect identification set according to the target event type when the target event type of an audio event is detected, the sound effect identification set comprises at least one sound effect identification, and the sound effect identification and the event type have one-to-one correspondence relation;
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a target type identifier of a sound effect receiver, the target type identifier belongs to one type identifier in a type identifier set, and the type identifier is used for representing role types corresponding to different receivers;
the obtaining module is further configured to obtain a path of a target sound effect file according to the target sound effect identifier determined by the determining module and the target type identifier obtained by the obtaining module, where the path is used to indicate an index of the sound effect file, and the path and the type identifier have a corresponding relationship;
and the playing module is used for playing the sound effect corresponding to the target sound effect file according to the path acquired by the acquisition module.
In a possible implementation manner, the client further includes a receiving module;
the receiving module is used for acquiring a sound effect configuration relation before the determining module determines a target sound effect identifier corresponding to the target event type from a sound effect identifier set according to the target event type, wherein the sound effect configuration relation comprises a corresponding relation among a sound effect identifier set, an event type set, a type identifier set and a file path set, the sound effect identifier set comprises a target sound effect identifier, the event type set comprises the target event type, the type identifier set comprises a target type identifier, and the file path set comprises a path of the target sound effect file.
In a possible implementation manner, the receiving module is specifically configured to receive an event type configuration instruction;
generating the event type set according to the event type configuration instruction, wherein the event type set comprises an interaction event type and a stand-alone event type, the interaction event type represents an event type performed between at least two roles, and the stand-alone event type represents an event type executed by one role;
and generating the target sound effect identification according to the type of the target event.
In a possible implementation manner, the receiving module is specifically configured to receive a file path configuration instruction;
and generating the file path set according to the file path configuration instruction, wherein the file path set comprises M file paths, M is an integer greater than or equal to 1, and the M file paths are stored in at least one storage module.
In a possible implementation manner, the receiving module is specifically configured to receive a receiver configuration instruction;
generating at least one of a first type identifier, a second type identifier, a third type identifier, a fourth type identifier and a fifth type identifier according to the receiver configuration instruction;
wherein the first type identification indicates that the sound effect receiving party is an event trigger;
the second type identification indicates that the sound effect receiving party is the event trigger and at least one member in the team, wherein the event trigger and the at least one member in the team both belong to a first team;
the third type identification indicates that the sound effect receiving party is the at least one member in the team;
the fourth type identification indicates that the sound effect receiving party is at least one member outside the team, the at least one member outside the team belongs to a second team, and the second team is a different team from the first team;
the fifth type identification indicates that the sound effect receiving party is the at least one in-team member and the at least one out-of-team member.
In a possible implementation manner, the obtaining module is specifically configured to obtain a first type identifier corresponding to the sound effect receiving party if the sound effect receiving party is an event trigger party;
acquiring a first sound effect file path of a target sound effect file according to the target sound effect identification and the first type identification;
the playing module is specifically configured to play the sound effect corresponding to the target sound effect file according to the first sound effect file path acquired by the acquisition module.
In a possible implementation manner, the obtaining module is specifically configured to obtain a second type identifier corresponding to the sound effect receiver if the sound effect receiver is a member in a team;
acquiring a second sound effect file path of the target sound effect file according to the target sound effect identification and the second type identification;
the playing module is specifically configured to play the sound effect corresponding to the target sound effect file according to the second sound effect file path acquired by the acquisition module.
In a possible implementation manner, the obtaining module is specifically configured to obtain a third type identifier corresponding to the sound effect receiver if the sound effect receiver is an extrateam member;
acquiring a third sound effect file path of the target sound effect file according to the target sound effect identification and the third type identification;
the playing module is specifically configured to play the sound effect corresponding to the target sound effect file according to the third sound effect file path acquired by the acquisition module.
In one possible implementation, the client further includes a rendering module;
the determining module is further configured to determine, when a target event type of an audio event is detected, a target video identifier corresponding to the target event type from a video identifier set according to the target event type, where the video identifier set includes at least one video identifier, and there is a one-to-one correspondence relationship between the video identifier and the event type;
the acquisition module is further configured to acquire a target type identifier of a video receiver, where the target type identifier is used to represent the relationship of the video receiver in a team;
the obtaining module is further configured to obtain a target video file path of a target video file according to the target video identifier determined by the determining module and the target type identifier obtained by the obtaining module, where the target video file path is used to indicate a position of the target video file;
and the rendering module is used for rendering the animation effect corresponding to the target video file according to the path of the target video file acquired by the acquisition module.
A third aspect of the present application provides a terminal device, comprising: a memory, a transceiver, a processor, and a bus system;
wherein the memory is used for storing programs;
the processor is used for executing the program in the memory and comprises the following steps:
when a target event type of an audio event is detected, determining a target sound effect identification corresponding to the target event type from a sound effect identification set according to the target event type, wherein the sound effect identification set comprises at least one sound effect identification, and the sound effect identification and the event type have a one-to-one correspondence relationship;
acquiring a target type identifier of a sound effect receiver, wherein the target type identifier belongs to one type identifier in a type identifier set, and the type identifier is used for representing role types corresponding to different receivers;
acquiring a path of a target sound effect file according to the target sound effect identification and the target type identification, wherein the path is used for indicating an index of the sound effect file, and the path and the type identification have a corresponding relation;
playing the sound effect corresponding to the target sound effect file according to the path;
the bus system is used for connecting the memory and the processor so as to enable the memory and the processor to communicate.
A fourth aspect of the present invention provides a computer-readable storage medium having stored therein instructions, which, when run on a computer, cause the computer to perform the method of the above-described aspects.
According to the technical scheme, the embodiment of the invention has the following advantages:
in the embodiment of the invention, a method for playing a sound effect is provided. When a target event type of an audio event is detected, the client can determine a target sound effect identification corresponding to the target event type from the sound effect identification set according to the target event type, where the sound effect identification set comprises at least one sound effect identification and the sound effect identifications and the event types have a one-to-one correspondence. The client then acquires the target type identification of the sound effect receiver, where the target type identification is used to represent the relationship of the sound effect receiver in the team and the sound effect receiver is the object for which the sound effect is to be played. The path of the target sound effect file is then obtained according to the target sound effect identification and the target type identification, where the path is used to indicate the index of the sound effect file and the path and the type identification have a corresponding relationship. Finally, the client plays the sound effect corresponding to the target sound effect file according to the path. In this way, when playing audio, the client can find a small event set within a large event set according to the event type, directly find the path of the audio file from the small event set according to the type of the sound effect receiver, and play the audio file through that path.
Drawings
FIG. 1 is a schematic diagram of an architecture of a sound effect playing system according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of sound effect playing according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an embodiment of a method for audio playback according to the present invention;
FIG. 4 is a diagram of an embodiment in which the sound effect receiver is an event trigger;
FIG. 5 is a diagram of an embodiment of an audio effect receiver being an event trigger and a team member according to the present invention;
FIG. 6 is a diagram of an embodiment of the present invention in which the sound effect receiver is a member in the team;
FIG. 7 is a diagram of an embodiment of the present invention in which the sound effect receiver is an out-of-team member;
FIG. 8 is a diagram of an embodiment of the present invention in which the sound effect receiver is an arbitrary member;
FIG. 9 is a diagram of an embodiment of a client according to the present invention;
FIG. 10 is a schematic diagram of another embodiment of the client according to the embodiment of the present invention;
FIG. 11 is a diagram of another embodiment of a client according to the embodiment of the present invention;
fig. 12 is a schematic structural diagram of a terminal device in the embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a sound effect playing method and a related device, which can efficiently locate the audio file to be played even as the complexity of an online game increases, thereby greatly reducing the logic complexity of searching for the audio file on the client.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that the present invention can be applied to an interactive application, which can be a network game, i.e. a multiplayer game that players play through an internet connection; the term generally refers to game products in which multiple players, connected through a computer network, operate their characters and scenes in a virtual environment according to certain rules for the purposes of entertainment and interaction. Specifically, the network game includes, but is not limited to, an action game, an adventure game, a puzzle game, a fighting game, a sports game, a strategy game, a shooting game, a role-playing game, a racing game, a massively multiplayer online role-playing game, and an action role-playing game. The present invention will be described by taking a first-person shooter (FPS) game as an example.
An FPS is a shooting game played from the player's subjective point of view. Most of the time the player cannot see the operated character, but plays through that character's eyes, which gives a strong sense of immersion and presence. Such games typically require separate controls for character movement and for aiming and shooting, together with a rich weapon selection. First-person shooter games are mostly set against realistic or science-fiction backgrounds and depict intense battle scenes (or localized combat environments).
Currently, in single-match competitive online games, audio also plays an important role. For example, when a game event is triggered, a character plays voice lines that advance the match or render the atmosphere, improving the player's sense of achievement or guiding the player toward the objective of the next stage; in many cases, different voices need to be played for different camp and teammate relationships. For ease of understanding, the present invention provides a method for playing sound effects, which is applied to the sound effect playing system shown in fig. 1. Referring to fig. 1, fig. 1 is a schematic structural diagram of the sound effect playing system in an embodiment of the present invention. As shown in the figure, a plurality of players control characters in a game through different clients to fight each other. Assume the battle is divided into two camps: camp A has five characters, namely character 1, character 2, character 3, character 4 and character 5, and camp B has five characters, namely character 6, character 7, character 8, character 9 and character 10. Client 1 controls character 1 in camp A to launch an attack instruction, that is, client 1 sends the attack instruction to the server; the attack instruction carries the identifier of the attacker, character 1, and the identifier of the attacked, character 8. The server determines from the attack instruction that character 1 is attacking character 8, and then determines whether the attack succeeds according to the attack value of character 1 and the blood volume value of character 8: if the attack value of character 1 is greater than or equal to the blood volume value of character 8, character 1 has successfully killed character 8. The server then sends the event that character 1 has killed character 8 to the clients corresponding to all characters in both camps. At this point, client 1, which controls character 1 in camp A, plays sound effect No. 1 according to the audio playback configuration table; client 2 controlling character 2 in camp A, client 3 controlling character 3, client 4 controlling character 4 and client 5 controlling character 5 play sound effect No. 2 according to the audio playback configuration table; and client 6 controlling character 6 in camp B, client 7 controlling character 7, client 8 controlling character 8, client 9 controlling character 9 and client 10 controlling character 10 play sound effect No. 3 according to the audio playback configuration table. By configuring the audio playback configuration table, a game designer can have different voices played for different characters and different team relationships when the same game event occurs.
It should be noted that the client is disposed on a terminal device, where the terminal device includes but is not limited to a tablet computer, a notebook computer, a palmtop computer, a mobile phone, and a Personal Computer (PC), and is not limited herein.
It should be noted that the server is a professional game server operated by the network game operator. The server allows network game players to store and modify the attributes and data of the network game (such as level, attack power, defense power and the like) while playing; because the network game's server is not local, the network game can run normally only when connected to the internet. A stand-alone game, by contrast, has a local game server, that is, its attributes and data are stored and modified by the local game server, so the stand-alone game can run normally without depending on the internet.
Referring to fig. 2, fig. 2 is a flow chart illustrating sound effect playback according to an embodiment of the present invention. As shown in the figure, a certain character makes a sound in step S1, and the sound effect identifier corresponding to the audio event is then obtained in step S2. In step S3, the corresponding sound effect is obtained according to the character's type within the team, and finally the sound effect is played according to that type in step S4.
With reference to the above description, the following describes a method for playing sound effects in the present invention from the perspective of a client, and referring to fig. 3, an embodiment of the method for playing sound effects in the embodiment of the present invention includes:
101. when a target event type of an audio event is detected, determining a target sound effect identification corresponding to the target event type from a sound effect identification set according to the target event type, wherein the sound effect identification set comprises at least one sound effect identification, and the sound effect identification and the event type have a one-to-one corresponding relation;
in this embodiment, when the server detects an audio event, the server may send the audio event identifier (ID) to the client, and the client may determine the target event type of the audio event according to the audio event ID. Optionally, when the server detects an audio event, the server may automatically generate an event type ID for the audio event and send the event type ID directly to the client, and the client may obtain the target event type of the audio event according to the event type ID.
The client acquires the target sound effect ID corresponding to the target event type from the sound effect ID set according to the target event type. It should be noted that the sound effect ID set includes at least one sound effect ID, and the sound effect IDs and the event types have a one-to-one correspondence.
For example, the audio event is that character 1 from camp A kills character 2 from camp B; the target event type of the audio event is "kill", and the client determines from the audio playback configuration table that the corresponding target sound effect ID is 5123704 according to the "kill" event type.
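For illustration only, the following is a minimal sketch (in Python, with hypothetical names; the patent does not prescribe any particular data structure) of how a client might keep the one-to-one mapping from event types to sound effect IDs described in step 101. Only the "kill" event and the ID 5123704 come from the example above; the second row is invented for illustration.

```python
# Hypothetical sketch of step 101: a one-to-one mapping from event type to sound effect ID.
EVENT_TYPE_TO_SOUND_EFFECT_ID = {
    "kill": 5123704,           # from the example above: kill events map to sound effect ID 5123704
    "release_skill": 5123800,  # invented ID for a stand-alone "release skill" event
}

def determine_target_sound_effect_id(target_event_type: str) -> int:
    """Return the target sound effect ID configured for the detected target event type."""
    try:
        return EVENT_TYPE_TO_SOUND_EFFECT_ID[target_event_type]
    except KeyError:
        raise ValueError(f"no sound effect configured for event type {target_event_type!r}")

# The server reports a "kill" event, so the client resolves sound effect ID 5123704.
assert determine_target_sound_effect_id("kill") == 5123704
```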
102. Acquiring a target type identifier of a sound effect receiver, wherein the target type identifier belongs to one type identifier in a type identifier set, and the type identifiers are used for representing role types corresponding to different receivers;
in this embodiment, after acquiring the target sound effect identifier, the client can find the type ID set corresponding to the target sound effect identifier in the audio playback configuration table, and the client further needs to acquire the target type ID of the sound effect receiver from the type ID set, where the sound effect receiver is the character for which the sound effect is to be played. Taking two camps as an example, assume that character 1 of camp A kills character 2 of camp B; then character 1 is the event trigger and can be regarded as the "protagonist", character 2 is an "enemy" relative to character 1, and the other characters in camp A are "teammates" of character 1.
Therefore, the server or the client determines which character is the event trigger according to the target event type, and further determines the relationships among the characters within a team and between different teams based on the event trigger. From these relationships, the client can obtain the target type ID of the sound effect receiver, where the sound effect receiver is the object for which the sound effect is to be played.
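As a rough sketch of step 102 (hypothetical code; the numeric type-ID values follow the examples given later in the description, where 1 denotes the event trigger, 3 an in-team member and 4 an out-of-team member), the client could derive the target type ID of the sound effect receiver from its relationship to the event trigger:

```python
# Hypothetical sketch of step 102: derive the receiver's type ID from its relationship to
# the event trigger (1 = event trigger, 3 = in-team member, 4 = out-of-team member).
def determine_target_type_id(receiver: str, trigger: str, team_of: dict) -> int:
    """Return the target type ID of the sound effect receiver."""
    if receiver == trigger:
        return 1                                   # the receiver itself triggered the event
    if team_of[receiver] == team_of[trigger]:
        return 3                                   # teammate of the event trigger
    return 4                                       # member of the opposing team

# Character 1 (camp A) kills character 2 (camp B); character 3 is character 1's teammate.
team_of = {"character_1": "A", "character_3": "A", "character_2": "B"}
print(determine_target_type_id("character_1", "character_1", team_of))  # 1: the trigger itself
print(determine_target_type_id("character_3", "character_1", team_of))  # 3: a teammate
print(determine_target_type_id("character_2", "character_1", team_of))  # 4: an enemy
```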
103. Acquiring a path of a target sound effect file according to the target sound effect identification and the target type identification, wherein the path is used for indicating an index of the sound effect file, and the path and the type identification have a corresponding relation;
in this embodiment, the client locates a path in the audio playback configuration table according to the target sound effect ID and the target type ID; the target sound effect file at that path is the sound effect file to be played. In general, each type identifier corresponds to one path; optionally, different type identifiers may also correspond to the same path.
For ease of understanding, a simplified audio playback configuration table is described below as an example; in practical applications the audio playback configuration table may also use other parameters, so the table is only an illustration and should not be construed as limiting the present invention. Referring to table 1, table 1 is a simplified audio playback configuration table.
TABLE 1
[Table 1: simplified audio playback configuration table, shown as an image in the original publication; it associates an event type with a sound effect ID, type IDs and sound effect file paths, e.g. the "kill" event with sound effect ID 5123704 and, for type ID 1, the path event/operator/hawk/vocal/stick_2_1.]
First, the event is determined to be a kill event according to the target event type; the sound effect ID 5123704 corresponding to the kill event is then found in the audio playback configuration table. Because character 1 is the event trigger (character 1 killed character 2) and the current client is the client controlling character 1, the target type ID for this client is 1, and the path is therefore located as event/operator/hawk/vocal/stick_2_1.
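Putting the two lookups together, a minimal sketch of step 103 against the Table 1 example might look as follows (hypothetical code; only the kill row with sound effect ID 5123704, type ID 1 and the path shown above is taken from the table):

```python
# Hypothetical sketch of step 103: the audio playback configuration table keyed by
# (sound effect ID, type ID); the single row below reproduces the Table 1 example.
AUDIO_PLAYBACK_CONFIG = {
    (5123704, 1): "event/operator/hawk/vocal/stick_2_1",  # kill event, receiver is the trigger
}

def locate_sound_effect_path(target_sound_effect_id: int, target_type_id: int) -> str:
    """Return the path (index) of the target sound effect file."""
    return AUDIO_PLAYBACK_CONFIG[(target_sound_effect_id, target_type_id)]

# Character 1 killed character 2 and the current client controls character 1 (type ID 1):
print(locate_sound_effect_path(5123704, 1))  # event/operator/hawk/vocal/stick_2_1
```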
104. And playing the sound effect corresponding to the target sound effect file according to the path of the target sound effect file.
In this embodiment, the client may obtain the target sound effect file locally according to the path, or may download the target sound effect file from the server; the client then plays the sound effect corresponding to the target sound effect file.
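A minimal sketch of step 104 is given below. The patent does not name any particular audio engine or download mechanism, so download_from_server and audio_engine_play are placeholders, and the local asset root is an assumption.

```python
import os

def download_from_server(path: str) -> str:
    """Placeholder for fetching the sound effect file from the game server."""
    raise NotImplementedError("the transport used to fetch files is game-specific")

def audio_engine_play(local_file: str) -> None:
    """Placeholder for the client's audio engine; here it only logs the call."""
    print(f"playing sound effect file: {local_file}")

def play_target_sound_effect(path: str, asset_root: str = "assets") -> None:
    """Step 104: resolve the path locally first, otherwise fetch it from the server, then play it."""
    local_file = os.path.join(asset_root, path)
    if not os.path.exists(local_file):
        local_file = download_from_server(path)
    audio_engine_play(local_file)
```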
It can be understood that sound effects refer to effects produced by sound, i.e., noise or sound added to the soundtrack to enhance the realism, atmosphere or dramatic message of a scene. Sound here includes musical tones as well as effects, covering digital, ambient, ordinary and professional sound effects. Sound effects are artificially created or enhanced sounds used in the audio processing of films, video games, music or other media.
In the embodiment of the invention, a method for playing a sound effect is provided. When a target event type of an audio event is detected, the client can determine a target sound effect identification corresponding to the target event type from the sound effect identification set according to the target event type, where the sound effect identification set comprises at least one sound effect identification and the sound effect identifications and the event types have a one-to-one correspondence. The client then acquires the target type identification of the sound effect receiver, where the target type identification is used to represent the relationship of the sound effect receiver in the team and the sound effect receiver is the object for which the sound effect is to be played. The path of the target sound effect file is then obtained according to the target sound effect identification and the target type identification, where the path is used to indicate the index of the sound effect file and the path and the type identification have a corresponding relationship. Finally, the client plays the sound effect corresponding to the target sound effect file according to the path. In this way, when playing audio, the client can find a small event set within a large event set according to the event type, directly find the path of the audio file from the small event set according to the type of the sound effect receiver, and play the audio file through that path.
Optionally, on the basis of the embodiment corresponding to fig. 3, in a first optional embodiment of the method for sound effect playing provided by the embodiment of the present invention, before determining, according to the target event type, a target sound effect identifier corresponding to the target event type from the sound effect identifier set, the method may further include:
the method comprises the steps of obtaining a sound effect configuration relation, wherein the sound effect configuration relation comprises a corresponding relation among a sound effect identification set, an event type set, a type identification set and a file path set, the sound effect identification set comprises a target sound effect identification, the event type set comprises a target event type, the type identification set comprises a target type identification, and the file path set comprises a path of a target sound effect file.
In this embodiment, the user may also pre-configure a sound effect configuration relationship, which may be expressed specifically as an audio playback configuration table; the configured content mainly includes sound effect IDs, event types, type IDs and file paths. Specifically, each sound effect ID corresponds to one event type. Assuming the target sound effect ID is 5123704, it represents a kill-type event, which generally refers to an event in which an attacker kills an opponent when the attack value is greater than or equal to the opponent's blood volume value. After configuration, each event type is abstracted into a sound effect ID. A file path is the sequence of folders, called a path, that the client traverses when searching for a file on disk; it should be noted that the M file paths may be paths stored locally at the client or paths stored on the server, and the M file paths have corresponding relationships with sound effect IDs. The type identifier is used to indicate a character's team relationship: in a multiplayer game, each character stands in some relationship to the other characters, such as teammate or enemy. By associating at least one type identifier with the M file paths, different characters can be made to find their corresponding file paths.
For ease of understanding, please refer to table 2, where table 2 is an illustration of a user configured audio playback configuration table.
TABLE 2
[Table 2: user-configured audio playback configuration table, shown as an image in the original publication; a single sound effect ID for the shout made when an ultimate skill is released is associated with four event descriptions, four type IDs and four sound effect file paths.]
As shown in table 2, if the current audio event is a character's shout when the ultimate skill is released, the corresponding event types include, but are not limited to, the shout heard by the character itself, the shout heard by an enemy, the shout heard by a teammate, and the shout on a special event (here, for example, the successful triggering of a passive skill). All event types in table 2 are associated with the same sound effect ID; when the trigger condition is reached, the sound effect ID is invoked and the four different sound effects (each corresponding to a different sound effect file path) are played. A sound effect ID may be a globally unique identifier (GUID), a 128-bit identifier generated by an algorithm, mainly applied in networks or systems with multiple nodes and multiple computers. Ideally, no two computers or computer clusters generate the same GUID; GUIDs are used to identify objects such as registry entries, classes and interface identifiers, databases and system directories.
The type ID is the type of the sound effect receiver and corresponds to the event type in the first column; the event type can be regarded as a textual description of the type ID. Each type ID defines a different receiving type, and when the same sound effect ID is invoked, the characters of different receivers receive the sound effect corresponding to their own type. For example, a type ID of 1 may indicate that only the character itself can hear the sound, a type ID of 2 may indicate that both the character and its teammates can hear it, a type ID of 3 may indicate that only teammates can hear it, a type ID of 4 may indicate that only enemies can hear it, and a type ID of 0 may indicate that any character can hear it. It should be noted that the above example is merely an illustration, and in practical applications the configuration may be adapted to the situation. In this way, for the same audio event at the same point in time, different characters in different camps can hear different sound effects.
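The receiving-type semantics described above can be expressed compactly; the sketch below is illustrative only, and the numeric values follow the example configuration (in practice they may be configured differently).

```python
# Illustrative constants for the receiver type IDs described above.
ANY_CHARACTER, TRIGGER_ONLY, TRIGGER_AND_TEAMMATES, TEAMMATES_ONLY, ENEMIES_ONLY = 0, 1, 2, 3, 4

def type_id_applies(type_id: int, is_trigger: bool, same_team: bool) -> bool:
    """Does a configured row with this type ID apply to the local client's character?

    is_trigger: the local character triggered the event; same_team: the local character
    is on the same team as the event trigger (true for the trigger itself).
    """
    if type_id == ANY_CHARACTER:
        return True
    if type_id == TRIGGER_ONLY:
        return is_trigger
    if type_id == TRIGGER_AND_TEAMMATES:
        return same_team
    if type_id == TEAMMATES_ONLY:
        return same_team and not is_trigger
    if type_id == ENEMIES_ONLY:
        return not same_team
    return False
```

With this predicate, the client controlling the shouting character matches type IDs 0, 1 and 2; a teammate's client matches 0, 2 and 3; and an enemy's client matches 0 and 4.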
Secondly, in the embodiment of the present invention, a way of configuring the audio playback configuration table is introduced, that is, a sound effect configuration relationship is obtained, where the sound effect configuration relationship includes the correspondence among a sound effect identifier set, an event type set, a type identifier set and a file path set. In this way, the user can flexibly configure the audio playback configuration table according to the scenes of different application types, so that the sound effects are close to the actual scene. In addition, correspondences in the audio playback configuration table can be added or deleted during configuration, which improves the practicability and operability of the scheme.
Optionally, on the basis of the first embodiment corresponding to fig. 3, in a second optional embodiment of the method for providing sound effect playing according to the embodiment of the present invention, acquiring the sound effect configuration relationship may include:
receiving an event type configuration instruction;
and generating an event type set according to the event type configuration instruction, wherein the event type set comprises an interactive event type and a stand-alone event type, the interactive event type represents an event type performed between at least two roles, and the stand-alone event type represents an event type executed by one role.
In this embodiment, how to configure an event type is described by way of example. Specifically, the user first triggers an event type configuration instruction, which carries an event type ID; the client can obtain the target event type to be configured next according to the event type configuration instruction. Several common event types are described below. The target event types in the invention fall into two categories: the first is the interaction event type, where an event of this type usually requires at least two characters to interact, and the second is the stand-alone event type, where an event of this type can usually be completed by a single character.
Interaction event types include, but are not limited to, a "kill" type, a "taunt" type, a "headshot" type, and a "team fight" type. A "kill" event in an FPS may refer to an event in which one character attacks another character until the other character's blood volume value reaches 0. A "taunt" event in an FPS may indicate that one character deliberately provokes another character into attacking it. A "headshot" event in an FPS may represent an event in which one character hits the head of another character so that the other character dies; in general, a headshot is regarded as a display of high skill and good aim, because the head is a small target and hitting the enemy's head is notably more effective than hitting the body. A "team fight" event in an FPS may represent an event in which multiple characters, grouped by camp, team or squad, fight collectively.
Stand-alone event types include, but are not limited to, a "release skill" type, a "switch weapon" type, a "reload" type, and a "spray paint" type. A "release skill" event in an FPS may indicate that a character triggers a mastered skill, such as a physical or magical skill. A "switch weapon" event in an FPS may represent an event in which a character switches from the weapon A it is currently using to weapon B. A "reload" event in an FPS may represent an event in which a character loads a bolt-action rifle. A "spray paint" event in an FPS may indicate an event in which a character sprays a designed graffiti pattern at a designated position.
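The event type set can then be represented, for example, as two groups; the sketch below uses illustrative English names for the event types discussed above and is not part of any configuration format prescribed by the patent.

```python
# Illustrative event type set split into interaction and stand-alone event types.
INTERACTION_EVENT_TYPES = {"kill", "taunt", "headshot", "team_fight"}
STAND_ALONE_EVENT_TYPES = {"release_skill", "switch_weapon", "reload", "spray_paint"}

def is_interaction_event(event_type: str) -> bool:
    """Interaction events involve at least two characters; stand-alone events only one."""
    return event_type in INTERACTION_EVENT_TYPES
```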
In the embodiment of the present invention, how to configure event types in the audio playback configuration table is described: the client receives an event type configuration instruction and then generates an event type set according to it, where the event type set includes interaction event types and stand-alone event types, an interaction event type represents an event type carried out between at least two characters, and a stand-alone event type represents an event type executed by one character. In this way, a configured event type can be either an interaction event or a stand-alone event, which increases the diversity of event types and better matches the event layout of a network game scene.
Optionally, on the basis of the first embodiment corresponding to fig. 3, in a third optional embodiment of the method for providing sound effect playing according to the embodiment of the present invention, acquiring the sound effect configuration relationship may include:
receiving a file path configuration instruction;
and generating a type identifier set according to the file path configuration instruction, wherein the type identifier set comprises M file paths, M is an integer greater than or equal to 1, and the M file paths are stored in at least one storage module.
In this embodiment, how to configure M file paths will be described as an example.
Specifically, the user first triggers a file path configuration instruction, which carries the IDs of M file paths; the client can acquire the M file paths to be configured next according to the file path configuration instruction. It should be noted that the M file paths may indicate locations on the client side or locations on the server side.
A path local to the client may indicate a location in the internal memory of the terminal device. The memory is an important component of the terminal device: it stores programs and data, and the terminal device can work normally only if it has memory. Memory is divided by usage into main memory and secondary memory; main memory is also called internal memory. Memory generally uses semiconductor memory cells, including but not limited to random access memory (RAM), read-only memory (ROM) and cache. Information (data or programs) stored in ROM at manufacturing time is kept permanently; it can only be read, generally cannot be written, and is not lost even when the terminal device is powered off. RAM can be both read from and written to; when the terminal device is powered off, the data stored in it is lost. The cache sits between the central processing unit (CPU) and the memory and has a faster read-write speed than main memory. When the CPU writes data to or reads data from the memory, this data is also stored in the cache; when the CPU needs the data again, it reads it from the cache instead of accessing the slower memory, and if the needed data is not in the cache, the CPU reads it from the memory.
A path on the server may indicate a location in the internal memory of the server, or a storage location on a cloud server. It can be understood that the concept of cloud storage is similar to that of cloud computing: a large number of storage devices of various types in a network are integrated and made to cooperate through application software, using functions such as cluster applications, grid technology or distributed file systems, forming a system that jointly provides data storage and service access, guarantees data security and saves storage space. In short, cloud storage is an emerging solution that puts storage resources on the cloud for clients to access; a user can conveniently access data at any time and any place through any internet-connected device.
It will be appreciated that files may be stored in the above mentioned storage means, and that the file path is used to indicate where in the memory the files are specifically stored.
In the embodiment of the present invention, how to configure file paths in the audio playback configuration table is described: the client first receives a file path configuration instruction and then obtains M file paths according to it, where M is an integer greater than or equal to 1 and the M file paths are stored in at least one storage module. In this way, the user can set a specific file path for a sound effect file; the file path can be a local address on the client or an address inside the server. If the file is stored at a client-side address, the sound effect file to be played can be found even in an offline state; if it is stored at a server-side address, more sound effect files can be stored because the server has a larger storage capacity. This improves the practicability and flexibility of the scheme.
Optionally, on the basis of the first embodiment corresponding to fig. 3, in a fourth optional embodiment of the method for providing sound effect playing according to the embodiment of the present invention, obtaining the sound effect configuration relationship may include:
receiving a configuration instruction of a receiver;
generating a type identifier set according to the configuration instruction of the receiver, wherein the type identifier set comprises at least one of a first type identifier, a second type identifier, a third type identifier, a fourth type identifier and a fifth type identifier;
wherein the first type identification indicates that the sound effect receiving party is an event trigger;
the second type identification indicates that the sound effect receiving party is an event trigger and at least one member in the team, wherein the event trigger and the at least one member in the team belong to the first team;
the third type identification indicates that the sound effect receiving party is at least one member in the team;
the fourth type identification indicates that the sound effect receiving party is at least one member outside the team, the at least one member outside the team belongs to a second team, and the second team is a different team from the first team;
the fifth type identification indicates that the sound effect recipient is at least one in-team member and at least one out-of-team member.
In this embodiment, how to configure the five common type identifiers is described by way of example. It can be understood that other type identifiers in the audio playback configuration table are configured in a similar way to these five, so the details are not repeated here.
Specifically, the user first triggers a receiver configuration instruction, which carries at least one type ID; the client can acquire the at least one type ID to be configured next according to the receiver configuration instruction. The contents indicated by the five common type IDs are described below with reference to fig. 4 to fig. 8.
For ease of understanding, please refer to fig. 4, which is a schematic diagram of an embodiment in which the sound effect receiver is the event trigger. As shown in the figure, role A and role B belong to team 1, and role C and role D belong to team 2. If role A kills role C, role A is the event trigger; in this case only role A is the sound effect receiver, that is, only the client controlling role A plays the corresponding sound effect, for example "beautiful".
The second type identifier indicates that the sound effect receivers are the event trigger and at least one in-team member, where the event trigger and the in-team member both belong to the first team. For ease of understanding, please refer to fig. 5, which is a schematic diagram of an embodiment in which the sound effect receivers are the event trigger and an in-team member. As shown in the figure, role A and role B belong to team 1, and role C and role D belong to team 2. If role A kills role C, role A is the event trigger; in this case both role A and role B are sound effect receivers, that is, the client controlling role A and the client controlling role B synchronously play the corresponding sound effect, for example "beautiful".
For ease of understanding, please refer to fig. 6, which is a schematic diagram of an embodiment in which the sound effect receiver is an in-team member. As shown in the figure, role A and role B belong to team 1, and role C and role D belong to team 2. If role A kills role C, role A is the event trigger; in this case only role B is the sound effect receiver, that is, only the client controlling role B plays the corresponding sound effect, for example "beautiful".
The fourth type identifier indicates that the sound effect receivers are at least one out-of-team member, where the out-of-team members belong to a second team that is different from the first team. For ease of understanding, please refer to fig. 7, which is a schematic diagram of an embodiment in which the sound effect receivers are out-of-team members. As shown in the figure, role A and role B belong to team 1, and role C and role D belong to team 2. If role A kills role C, role A is the event trigger; in this case role C and role D are the sound effect receivers, that is, the client controlling role C and the client controlling role D synchronously play the corresponding sound effect, for example "don't give up, we will make a comeback".
Please refer to fig. 8, which is a schematic diagram of an embodiment in which the sound effect receiver is any member. As shown in the figure, role A and role B belong to team 1, and role C and role D belong to team 2. If role A kills role C, role A is the event trigger; in this case role A, role B, role C and role D are all sound effect receivers, that is, the clients controlling role A, role B, role C and role D synchronously play the corresponding sound effect, for example "a player on the field has been eliminated".
In the embodiment of the present invention, how to configure the type identifiers in the audio playback configuration table is described: the client generates at least one of a first, second, third, fourth and fifth type identifier according to the receiver configuration instruction, where the first type identifier indicates that the sound effect receiver is the event trigger, the second indicates that the receivers are the event trigger and at least one in-team member, the third indicates that the receivers are at least one in-team member, the fourth indicates that the receivers are at least one out-of-team member, and the fifth indicates that the receivers are at least one in-team member and at least one out-of-team member. In this way, the user can flexibly configure different identifiers for characters in different positions, which improves the diversity and flexibility of the scheme.
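As an illustration of the five receiver type identifiers, the following sketch (hypothetical code) computes the set of receiving roles for the team setup used in fig. 4 to fig. 8, with role A and role B in team 1, role C and role D in team 2, and role A as the event trigger.

```python
# Which clients receive the sound effect for each of the five type identifiers,
# using the team setup of fig. 4 to fig. 8 (A and B in team 1, C and D in team 2).
TEAMS = {"A": 1, "B": 1, "C": 2, "D": 2}

def receivers(identifier: int, trigger: str) -> set:
    """Return the set of roles whose clients play the sound, for type identifiers 1-5."""
    team = TEAMS[trigger]
    teammates = {r for r, t in TEAMS.items() if t == team and r != trigger}
    outsiders = {r for r, t in TEAMS.items() if t != team}
    return {
        1: {trigger},                          # first type: the event trigger only (fig. 4)
        2: {trigger} | teammates,              # second type: trigger and in-team members (fig. 5)
        3: teammates,                          # third type: in-team members only (fig. 6)
        4: outsiders,                          # fourth type: out-of-team members only (fig. 7)
        5: {trigger} | teammates | outsiders,  # fifth type: any member (fig. 8)
    }[identifier]

print(sorted(receivers(2, "A")))  # ['A', 'B']
print(sorted(receivers(4, "A")))  # ['C', 'D']
```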
Optionally, on the basis of any one of the first to fourth embodiments corresponding to fig. 3 and fig. 3, in a fifth optional embodiment of the method for audio effect playing according to the embodiment of the present invention, acquiring the target type identifier of the audio effect receiver may include:
if the sound effect receiving party is an event trigger party, acquiring a first type identification corresponding to the sound effect receiving party;
obtaining a path of the target sound effect file according to the target sound effect identifier and the target type identifier may include:
acquiring a first sound effect file path of a target sound effect file according to the target sound effect identification and the first type identification;
playing the sound effect corresponding to the target sound effect file according to the path may include:
and playing the sound effect corresponding to the target sound effect file according to the first sound effect file path.
In this embodiment, the sound effect playback when the character controlled by the client is the event trigger is described. Specifically, after acquiring the target sound effect identifier, the client can find the type ID set corresponding to that sound effect identifier in the audio playback configuration table. If the sound effect receiver is the event trigger, the client controlling the sound effect receiver acquires the first type ID from the type ID set, where the first type ID indicates that the sound effect receiver is the event trigger. The client first finds the information belonging to the target event type according to the target sound effect ID, then locates the first sound effect file path via the first type ID, and finally finds the corresponding target sound effect file through the first sound effect file path; the client controlling the event trigger then plays the sound effect corresponding to the target sound effect file.
In the game, one audio event triggers only one sound effect ID; at playback time, this is mapped to the sound effect file that should actually be played according to the sound effect ID and the team relationship of the sound effect receiver. For ease of introduction, please refer to table 3, which illustrates how different sound effect files correspond to the same sound effect ID in the audio playback configuration table; the type ID identifies the sound effect receiver, and type IDs and sound effect files are in one-to-one correspondence. When the type ID of a sound effect receiver is triggered, the corresponding sound effect file is called.
TABLE 3
Type ID 1 (event trigger): kill_2_1
Type ID 3 (in-team member): kill_2_2
Type ID 4 (out-of-team member): kill_2_3
Type ID 0 (any character): kill_3
Role A and role B belong to team 1, and role C and role D belong to team 2. Assume that role A carries out a certain event (such as releasing an ultimate skill) and the sound effect ID is triggered; role A is then the event trigger. The client controlling role A plays the sound effect file with the first type ID of 1, and the first sound effect file path is obtained according to the sound effect ID and the first type identifier, i.e. this client receives the sound effect "kill_2_1". The client controlling role B can play the sound effect file with type ID 3, i.e. it receives the sound effect "kill_2_2". The clients controlling role C and role D can play the sound effect file with type ID 4, i.e. these two clients receive the sound effect "kill_2_3". The clients controlling role A, role B, role C and role D can each also play the sound effect file with type ID 0, i.e. all four clients receive the sound effect "kill_3". The priority of playback can be configured in advance; for example, the client controlling role A first plays the sound effect "kill_2_1" and then plays the sound effect "kill_3".
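The selection of every applicable row together with the pre-configured playback priority can be sketched as follows (hypothetical code; the four rows reproduce the Table 3 example, and the priority values are assumptions chosen so that kill_2_1 is played before the shared kill_3).

```python
# All configuration rows for one sound effect ID (the Table 3 example) and the selection
# of every row that applies to the local client, played in ascending priority order.
TABLE_3_ROWS = [
    {"type_id": 1, "path": "kill_2_1", "priority": 0},  # event trigger
    {"type_id": 3, "path": "kill_2_2", "priority": 0},  # in-team members
    {"type_id": 4, "path": "kill_2_3", "priority": 0},  # out-of-team members
    {"type_id": 0, "path": "kill_3",   "priority": 1},  # every character
]

def sounds_for_client(is_trigger: bool, same_team: bool) -> list:
    """Return the sound effect file paths the local client should play, in order."""
    applicable = [row for row in TABLE_3_ROWS
                  if row["type_id"] == 0
                  or (row["type_id"] == 1 and is_trigger)
                  or (row["type_id"] == 3 and same_team and not is_trigger)
                  or (row["type_id"] == 4 and not same_team)]
    return [row["path"] for row in sorted(applicable, key=lambda r: r["priority"])]

# The client controlling role A (the event trigger) plays kill_2_1 first, then kill_3.
print(sounds_for_client(is_trigger=True, same_team=True))    # ['kill_2_1', 'kill_3']
# A client controlling role C or role D (the enemy team) plays kill_2_3, then kill_3.
print(sounds_for_client(is_trigger=False, same_team=False))  # ['kill_2_3', 'kill_3']
```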
Further, in the embodiment of the present invention, if the sound effect receiving party is the event trigger party, the first type identifier corresponding to the sound effect receiving party is obtained, then the first sound effect file path of the target sound effect file is obtained according to the target sound effect identifier and the first type identifier, and finally, the client plays the sound effect corresponding to the target sound effect file according to the first sound effect file path. In this way, when the role controlled by the client is the event trigger, the sound effect configured for that role can be played, so that the flexibility and diversity of the scheme are improved.
Optionally, on the basis of fig. 3 and any one of the first to fourth embodiments corresponding to fig. 3, in a sixth optional embodiment of the method for sound effect playing according to the embodiment of the present invention, acquiring the target type identifier of the sound effect receiver includes:
if the sound effect receiver is an in-team member, acquiring a second type identifier corresponding to the sound effect receiver;
obtaining a path of a target sound effect file according to the target sound effect identifier and the target type identifier, wherein the path comprises the following steps:
acquiring a second sound effect file path of the target sound effect file according to the target sound effect identification and the second type identification;
playing the sound effect corresponding to the target sound effect file according to the path, which comprises the following steps:
and playing the sound effect corresponding to the target sound effect file according to the second sound effect file path.
In this embodiment, the sound effect playing mode when the role controlled by the client is an in-team member (teammate) will be described. Specifically, after acquiring the target sound effect identifier, the client may find the type ID set corresponding to the target sound effect identifier from the audio playing configuration table. If the sound effect receiver is an in-team member, the client controlling the sound effect receiver acquires a second type ID from the type ID set, where the second type ID is an ID used to indicate that the sound effect receiver is an in-team member. The client first finds the related information belonging to the target event type according to the target sound effect ID, then locates the second sound effect file path from the related information through the second type ID, and finally finds the corresponding target sound effect file through the second sound effect file path, so that the client controlling the in-team member plays the sound effect corresponding to the target sound effect file.
One audio event in the game triggers only one sound effect ID, and during playing the event is mapped to the sound effect file that should actually be played according to the sound effect ID and the team relationship of the sound effect receiver. For convenience of introduction, please refer to table 4, which is an illustration of an audio playing configuration table. The same sound effect ID corresponds to different sound effect files, and the type ID represents the role type of the sound effect receiver; the type ID and the sound effect file have a one-to-one correspondence relationship. When the type ID of the sound effect receiver matches an entry, the corresponding sound effect file is called.
TABLE 4
Type ID    Sound effect receiver     Sound effect file
1          Event trigger             kill_2_1
3          In-team member            kill_2_2
4          Out-of-team member        kill_2_3
0          All roles                 kill_3
Role A and role B belong to team 1, and role C and role D belong to team 2. Assume that role A performs a certain event (such as releasing an ultimate skill) at this moment, and a sound effect ID is triggered; that is, role A is the event trigger. The client controlling role B plays the sound effect file with the second type ID of 3, and the second sound effect file path is obtained according to the sound effect ID and the second type identifier; that is, this client receives the sound effect "kill_2_2". The client controlling role A plays the sound effect file with the type ID of 1, that is, it receives the sound effect "kill_2_1". The clients controlling role C and role D play the sound effect file with the type ID of 4, that is, these two clients receive the sound effect "kill_2_3". The clients controlling role A, role B, role C and role D each also play the sound effect file with the type ID of 0, that is, all four clients receive the sound effect "kill_3". The playing priority can be configured in advance.
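The type ID a client uses in this lookup follows from the team relationship between its own role and the event trigger. A minimal sketch of that classification, assuming the type ID values 1, 3 and 4 from Table 4 and hypothetical helper names:

```python
# Sketch of deriving a client's type ID from the team relationship;
# the type ID values and names are assumptions.

EVENT_TRIGGER, IN_TEAM_MEMBER, OUT_OF_TEAM_MEMBER = 1, 3, 4

def type_id_for(own_role, trigger_role, team_of):
    """Classify the local client's role relative to the event trigger."""
    if own_role == trigger_role:
        return EVENT_TRIGGER
    if team_of[own_role] == team_of[trigger_role]:
        return IN_TEAM_MEMBER
    return OUT_OF_TEAM_MEMBER

team_of = {"A": 1, "B": 1, "C": 2, "D": 2}
print(type_id_for("B", "A", team_of))  # 3 -> receives "kill_2_2"
print(type_id_for("C", "A", team_of))  # 4 -> receives "kill_2_3"
```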
Further, in the embodiment of the present invention, if the sound effect receiving party is an in-team member, the second type identifier corresponding to the sound effect receiving party is obtained, then the second sound effect file path of the target sound effect file is obtained according to the target sound effect identifier and the second type identifier, and finally, the client plays the sound effect corresponding to the target sound effect file according to the second sound effect file path. In this way, when the role controlled by the client is an in-team member, the sound effect configured for that role can be played, so that the flexibility and diversity of the scheme are improved.
Optionally, on the basis of fig. 3 and any one of the first to fourth embodiments corresponding to fig. 3, in a seventh optional embodiment of the method for sound effect playing according to the embodiment of the present invention, acquiring the target type identifier of the sound effect receiver may include:
if the sound effect receiver is an out-team member, acquiring a third type identifier corresponding to the sound effect receiver;
obtaining a path of the target sound effect file according to the target sound effect identifier and the target type identifier may include:
acquiring a third sound effect file path of the target sound effect file according to the target sound effect identification and the third type identification;
playing the sound effect corresponding to the target sound effect file according to the path may include:
and playing the sound effect corresponding to the target sound effect file according to the third sound effect file path.
In this embodiment, the sound effect playing mode when the role controlled by the client is an out-of-team member (enemy) will be described. Specifically, after acquiring the target sound effect identifier, the client may find the type ID set corresponding to the target sound effect identifier from the audio playing configuration table. If the sound effect receiver is an out-of-team member, the client controlling the sound effect receiver acquires a third type ID from the type ID set, where the third type ID is an ID used for indicating that the sound effect receiver is an out-of-team member. The client first finds the related information belonging to the target event type according to the target sound effect ID, then locates the third sound effect file path from the related information through the third type ID, and finally finds the corresponding target sound effect file through the third sound effect file path, so that the client controlling the out-of-team member plays the sound effect corresponding to the target sound effect file.
One audio event in the game triggers only one sound effect ID, and during playing the event is mapped to the sound effect file that should actually be played according to the sound effect ID and the team relationship of the sound effect receiver. For convenience of introduction, please refer to table 5, which is an illustration of an audio playing configuration table. The same sound effect ID corresponds to different sound effect files, and the type ID represents the role type of the sound effect receiver; the type ID and the sound effect file have a one-to-one correspondence relationship. When the type ID of the sound effect receiver matches an entry, the corresponding sound effect file is called.
TABLE 5
Type ID    Sound effect receiver     Sound effect file
1          Event trigger             kill_2_1
3          In-team member            kill_2_2
4          Out-of-team member        kill_2_3
0          All roles                 kill_3
Role A and role B belong to team 1, and role C and role D belong to team 2. If role A performs a certain event (such as releasing an ultimate skill), a sound effect ID is triggered; that is, role A is the event trigger. The client controlling role C and the client controlling role D play the sound effect file with the third type ID of 4, and the third sound effect file path is obtained according to the sound effect ID and the third type identifier; that is, these clients receive the sound effect "kill_2_3". The client controlling role A plays the sound effect file with the type ID of 1, that is, it receives the sound effect "kill_2_1". The client controlling role B plays the sound effect file with the type ID of 3, that is, it receives the sound effect "kill_2_2". The clients controlling role A, role B, role C and role D each also play the sound effect file with the type ID of 0, that is, all four clients receive the sound effect "kill_3". The playing priority can be configured in advance.
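When a client matches both a role-specific entry and the shared entry with type ID 0, the pre-configured priority decides the playing order. A minimal sketch of such priority-ordered playback; the priority values and the play() call are assumptions:

```python
# Sketch of priority-ordered playback when several sound effects apply;
# priority values and play() are illustrative assumptions.

PRIORITY = {"audio/kill_2_3": 0, "audio/kill_3": 1}  # lower value plays first

def play(path):
    print(f"playing {path}")

def play_in_priority_order(paths):
    for path in sorted(paths, key=lambda p: PRIORITY.get(p, 99)):
        play(path)

# Client controlling role C (out-of-team member) after role A's event:
play_in_priority_order(["audio/kill_3", "audio/kill_2_3"])
# playing audio/kill_2_3
# playing audio/kill_3
```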
Further, in the embodiment of the present invention, if the sound effect receiving party is an out-of-team member, the third type identifier corresponding to the sound effect receiving party is obtained, then the third sound effect file path of the target sound effect file is obtained according to the target sound effect identifier and the third type identifier, and finally, the client plays the sound effect corresponding to the target sound effect file according to the third sound effect file path. In this way, when the role controlled by the client is an out-of-team member, the sound effect configured for that role can be played, so that the flexibility and diversity of the scheme are improved.
Optionally, on the basis of the embodiment corresponding to fig. 3, in an eighth optional embodiment of the method for providing sound effect playing according to the embodiment of the present invention, when the target event type of the audio event is detected, the method may further include:
determining a target video identifier corresponding to the target event type from a video identifier set according to the target event type, wherein the video identifier set comprises at least one video identifier, and the video identifier and the event type have a one-to-one correspondence relationship;
acquiring a target type identifier of a video receiver, wherein the target type identifier is used for representing the relation of a sound effect receiver in a team;
acquiring a target video file path of a target video file according to the target video identifier and the target type identifier, wherein the target video file path is used for indicating the position of the target video file;
and rendering the animation effect corresponding to the target video file according to the path of the target video file.
In this embodiment, a manner in which the client quickly locates the video to be played according to the video playing configuration table will be described, and it is understood that, for convenience of description, a client will be described as an example. Specifically, when the server detects a video event ID, the server may send the video event ID to the client, and the client may determine the target event type of the video event according to the video event ID. Optionally, when the server detects a video event, the server may automatically generate an event type ID of the video event, and the server may directly send the event type ID to the client, and the client may obtain the target event type of the video event according to the event type ID. The client acquires a target video ID corresponding to the target event type from the video ID set according to the target event type, where it should be noted that the video ID set includes at least one video ID, and there is a one-to-one correspondence relationship between the video ID and the event type.
For example, for a video event in which role 1 from camp A kills role 2 from camp B, the target event type of the video event is "kill", and the client determines the corresponding target video ID, 5123704, from the video playing configuration table according to the "kill" event type. It can be understood that the target sound effect ID may be consistent with the target video ID, and thus the same target ID may be used when synchronizing the video and audio information, which further reduces the indexing difficulty for the client.
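Assuming the audio and video playing configuration tables are keyed by the same target ID, one lookup key resolves both resources. A minimal illustrative sketch; the table contents are assumptions:

```python
# Sketch of sharing one target ID between the audio and video configuration
# tables; table contents are illustrative assumptions.

AUDIO_CONFIG = {5123704: {1: "audio/kill_5_1"}}
VIDEO_CONFIG = {5123704: {1: "event/operator/hawk/vocal/kill_5_1"}}

def resolve_media(target_id, type_id):
    """Resolve the sound effect path and the video path from the same target ID."""
    audio = AUDIO_CONFIG.get(target_id, {}).get(type_id)
    video = VIDEO_CONFIG.get(target_id, {}).get(type_id)
    return audio, video

print(resolve_media(5123704, 1))
# ('audio/kill_5_1', 'event/operator/hawk/vocal/kill_5_1')
```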
The client also needs to obtain the target type ID of the video receiver, where the video receiver is the role for which the video is to be played. Taking two camps as an example, assume that role 1 of camp A kills role 2 of camp B; then role 1 is the event trigger and can be regarded as the "principal role", role 2 is an "enemy" relative to role 1, and the other roles in camp A are "teammates". Therefore, the server or the client determines which role is the event trigger according to the target event type, and further determines the relationship among the roles in different teams and the relationship among the different teams according to the event trigger. Through these relationships, the client can obtain the target type ID of the video receiver, where the video receiver is the object for which the video is to be played. The client then locates a target video file path from the video playing configuration table according to the target video ID and the target type ID, and the target video file under the target video file path is the video file to be played.
For the convenience of understanding, a simplified video playing configuration table will be described as an example, and in practical applications, the video playing configuration table may also be set as other parameters, which is only an illustration here and should not be construed as a limitation to the present invention. Referring to table 6, table 6 is a simplified video playback configuration table.
TABLE 6
Event type    Video ID    Type ID    Video file path
Kill          5123704     1          event/operator/hawk/vocal/kill_5_1
First, the event is determined to be a kill event according to the target event type; then the video ID 5123704 corresponding to the kill event is found from the video playing configuration table; it is then determined that the event trigger is role 1 killing role 2, and that the current client is the client controlling role 1, so the target type ID of this client is 1. Thus, the target video file path is located as event/operator/hawk/vocal/kill_5_1.
The client may obtain the target video file locally according to the target video file path, or may download the target video file from the server, where the client renders an animation effect corresponding to the target video file on the interface.
It can be understood that an animation effect is an expression technique in which changes in the scene are presented in an effect picture through a combination of moving and static images. For example, after role 1 successfully kills role 2, the client interface of role 1 is controlled to display a twinkling starlight animation effect, and the client interface of role 2 is controlled to display an animation effect of day turning into a rainy night. The client interfaces corresponding to the other roles belonging to the same team as role 1 display a firework-explosion animation effect, and the client interfaces corresponding to the other roles belonging to the same team as role 2 display a flashing red-light animation effect.
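As an illustrative sketch, the per-role animation dispatch described above can be modelled as a mapping from the receiver's relationship to an animation asset; the relationship keys, asset names and render_effect() call are assumptions:

```python
# Sketch of dispatching animation effects per role relationship;
# keys, asset names and render_effect() are illustrative assumptions.

ANIMATION_BY_RELATION = {
    "event_trigger":    "starlight_twinkle",
    "trigger_teammate": "firework_explosion",
    "victim":           "day_to_rainy_night",
    "victim_teammate":  "red_light_flash",
}

def render_effect(asset):
    print(f"rendering animation: {asset}")

def render_for(relation):
    asset = ANIMATION_BY_RELATION.get(relation)
    if asset is not None:
        render_effect(asset)

render_for("victim_teammate")  # rendering animation: red_light_flash
```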
Secondly, in the embodiment of the present invention, when the target event type is detected, the client may further determine a target video identifier corresponding to the target event type from the video identifier set according to the target event type, then the client obtains the target type identifier of the video receiver, next, the client obtains a target video file path of the target video file according to the target video identifier and the target type identifier, and finally, renders an animation effect corresponding to the target video file according to the target video file path. Through this method, when rendering the video, the client can find the small event set from the large event set according to the event type, directly find the path of the video file from the small event set according to the type of the video receiver, and render the video file through the path. Meanwhile, in an actual scene, the client can not only play different sound effects according to different role positioning, but also display different animations, further improving the diversity of the scheme.
Referring to fig. 9, fig. 9 is a schematic diagram of an embodiment of a client according to the present invention, where the client 20 includes:
a determining module 201, configured to, when a target event type of an audio event is detected, determine, according to the target event type, a target sound effect identifier corresponding to the target event type from a sound effect identifier set, where the sound effect identifier set includes at least one sound effect identifier, and a one-to-one correspondence relationship exists between the sound effect identifier and the event type;
an obtaining module 202, configured to obtain a target type identifier of a sound effect receiver, where the target type identifier belongs to one type identifier in a set of type identifiers, and the type identifiers are used to represent role types corresponding to different receivers;
the obtaining module 202 is further configured to obtain a path of a target sound effect file according to the target sound effect identifier determined by the determining module 201 and the target type identifier obtained by the obtaining module 202, where the path is used to indicate an index of the sound effect file, and the path and the type identifier have a corresponding relationship;
the playing module 203 is configured to play the sound effect corresponding to the target sound effect file according to the path acquired by the acquiring module 202.
In this embodiment, when a target event type of an audio event is detected, a determining module 201 determines a target sound effect identifier corresponding to the target event type from a sound effect identifier set according to the target event type, where the sound effect identifier set includes at least one sound effect identifier, and a one-to-one correspondence relationship exists between the sound effect identifier and the event type, an obtaining module 202 obtains a target type identifier of a sound effect receiver, where the target type identifier belongs to one type identifier in a type identifier set, the type identifiers are used to represent role types corresponding to different receivers, and the obtaining module 202 obtains a path of a target sound effect file according to the target sound effect identifier determined by the determining module 201 and the target type identifier obtained by the obtaining module 202, where the path is used to indicate an index of the sound effect file, and the path has a corresponding relationship with the type identifier, and the playing module 203 plays the sound effect corresponding to the target sound effect file according to the path acquired by the acquiring module 202.
In an embodiment of the present invention, a client is provided, when a target event type of an audio event is detected, the client can determine a target sound effect identification corresponding to the target event type from the sound effect identification set according to the target event type, wherein, the sound effect identification set comprises at least one sound effect identification, the sound effect identification and the event type have one-to-one correspondence, then the client acquires the target type identification of the sound effect receiver, wherein, the target type identification is used for representing the relation of the sound effect receiving party in the team, the sound effect receiving party is an object to be played with the sound effect, and then the path of the target sound effect file is obtained according to the target sound effect identification and the target type identification, the path is used for indicating the index of the sound effect file, the path and the type identification have a corresponding relation, and finally the client plays the sound effect corresponding to the target sound effect file according to the path. Through the method, the client can find the small event set from the large event set according to the event type when playing the audio, directly find the path of the audio file from the small event set according to the type of the audio receiver, and play the audio file through the path.
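A minimal sketch of the module decomposition of client 20; the class, method and configuration key names are illustrative assumptions rather than the actual implementation:

```python
# Sketch of the determining, obtaining and playing modules of client 20;
# names and configuration keys are illustrative assumptions.

class SoundEffectClient:
    def __init__(self, config):
        self.config = config  # audio playing configuration table

    # determining module: target event type -> target sound effect ID
    def determine_sound_id(self, event_type):
        return self.config["event_to_sound_id"][event_type]

    # obtaining module: sound effect ID + type ID -> sound effect file path
    def obtain_path(self, sound_id, type_id):
        return self.config["paths"][(sound_id, type_id)]

    # playing module: play the sound effect located at the path
    def play(self, path):
        print(f"playing {path}")

client = SoundEffectClient({
    "event_to_sound_id": {"kill": 5123704},
    "paths": {(5123704, 1): "audio/kill_2_1"},
})
client.play(client.obtain_path(client.determine_sound_id("kill"), 1))
```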
Optionally, on the basis of the embodiment corresponding to fig. 9, please refer to fig. 10, in another embodiment of the client 20 provided in the embodiment of the present invention, the client 20 further includes a receiving module 204;
the receiving module 204 is configured to, before the determining module 201 determines a target sound effect identifier corresponding to the target event type from the sound effect identifier set according to the target event type, obtain a sound effect configuration relationship, where the sound effect configuration relationship includes a correspondence between the sound effect identifier set, the event type set, the type identifier set, and the file path set, the sound effect identifier set includes the target sound effect identifier, the event type set includes the target event type, the type identifier set includes the target type identifier, and the file path set includes a path of the target sound effect file.
Secondly, in the embodiment of the present invention, a way of configuring an audio playing configuration table is introduced, that is, a sound effect configuration relationship needs to be obtained, where the sound effect configuration relationship includes a correspondence relationship between a sound effect identifier set, an event type set, a type identifier set, and a file path set. In this way, the user can flexibly configure the audio playing configuration table according to the scenarios of different application types, so that the sound effect is close to the actual scene. In addition, correspondence relationships in the audio playing configuration table can be added or deleted during configuration, which improves the practicability and operability of the scheme.
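A sketch of the sound effect configuration relationship as a data structure, tying the sound effect identifier set, event type set, type identifier set and file path set together; field names and values are illustrative assumptions:

```python
# Sketch of the sound effect configuration relationship; values are illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class SoundEffectEntry:
    event_type: str       # member of the event type set
    sound_effect_id: int  # member of the sound effect identifier set
    type_id: int          # member of the type identifier set
    file_path: str        # member of the file path set

SOUND_EFFECT_CONFIG = [
    SoundEffectEntry("kill", 5123704, 1, "audio/kill_2_1"),
    SoundEffectEntry("kill", 5123704, 3, "audio/kill_2_2"),
    SoundEffectEntry("kill", 5123704, 4, "audio/kill_2_3"),
    SoundEffectEntry("kill", 5123704, 0, "audio/kill_3"),
]
```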
Alternatively, on the basis of the embodiment corresponding to fig. 10, in another embodiment of the client 20 provided in the embodiment of the present invention,
the receiving module 204 is specifically configured to receive an event type configuration instruction;
and generating the event type set according to the event type configuration instruction, wherein the event type set comprises an interaction event type and a stand-alone event type, the interaction event type represents an event type carried out between at least two roles, and the stand-alone event type represents an event type executed by one role.
In the embodiment of the present invention, how to configure event types in the audio playing configuration table is described, that is, the client receives an event type configuration instruction, and then may generate an event type set according to the event type configuration instruction, where the event type set includes an interaction event type and a stand-alone event type, the interaction event type represents an event type performed between at least two roles, and the stand-alone event type represents an event type performed by one role. In this way, the configured event type can be either an interaction event or a stand-alone event, which improves the diversity of the event types and better matches the event layout in an online game scene.
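A minimal sketch of generating the event type set from an event type configuration instruction, separating interaction events from stand-alone events; the concrete event names are assumptions:

```python
# Sketch of building the event type set; event names are illustrative assumptions.

def build_event_type_set(instruction):
    """Interaction events involve at least two roles; stand-alone events involve one."""
    interaction = set(instruction.get("interaction", []))  # e.g. "kill"
    standalone = set(instruction.get("standalone", []))    # e.g. "level_up"
    return interaction | standalone

print(build_event_type_set({"interaction": ["kill"], "standalone": ["level_up"]}))
# {'kill', 'level_up'} (set order may vary)
```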
Alternatively, on the basis of the embodiment corresponding to fig. 10, in another embodiment of the client 20 provided in the embodiment of the present invention,
the receiving module 204 is specifically configured to receive a file path configuration instruction;
generating the type identifier set according to the file path configuration instruction, wherein the type identifier set includes M file paths, M is an integer greater than or equal to 1, and the M file paths are stored in at least one storage module.
In the embodiment of the present invention, how to configure the file paths in the audio playing configuration table is described, that is, the client obtains M file paths, where M is an integer greater than or equal to 1, and the M file paths are stored in at least one storage module. In this way, the user can set a specific file path for each sound effect file. The file path can be a local address of the client or an internal address of the server; if the file path points to an address on the client, the sound effect file to be played can be found even in an offline state. If the sound effect file is stored at an address on the server, more sound effect files can be stored because the server has a larger storage capacity. Therefore, the practicability and flexibility of the scheme are improved.
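Since a configured file path may point at local client storage or at a server address, a client can try the local copy first and fall back to downloading. A minimal sketch under that assumption; the download helper is hypothetical:

```python
# Sketch of loading a sound effect file whose configured path may be local
# to the client or hosted on the server; download_from_server is hypothetical.

import os

def download_from_server(path):
    # Placeholder for the client's actual download logic.
    raise NotImplementedError(f"would fetch {path} from the server")

def load_sound_file(path):
    if os.path.exists(path):           # local address: usable even offline
        with open(path, "rb") as f:
            return f.read()
    return download_from_server(path)  # server address: larger storage capacity
```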
Alternatively, on the basis of the embodiment corresponding to fig. 10, in another embodiment of the client 20 provided in the embodiment of the present invention,
the receiving module 204 is specifically configured to receive a receiver configuration instruction;
generating the type identifier set according to the receiver configuration instruction, wherein the type identifier set comprises at least one of a first type identifier, a second type identifier, a third type identifier, a fourth type identifier and a fifth type identifier;
wherein the first type identification indicates that the sound effect receiving party is an event trigger;
the second type identification indicates that the sound effect receiving party is the event trigger and at least one member in the team, wherein the event trigger and the at least one member in the team both belong to a first team;
the third type identification indicates that the sound effect receiving party is the at least one member in the team;
the fourth type identification indicates that the sound effect receiving party is at least one member outside the team, the at least one member outside the team belongs to a second team, and the second team is a different team from the first team;
the fifth type identification indicates that the sound effect receiving party is the at least one in-team member and the at least one out-of-team member.
In the embodiment of the present invention, how to configure the type identifier in the audio playing configuration table is described, that is, the client generates at least one of a first type identifier, a second type identifier, a third type identifier, a fourth type identifier, and a fifth type identifier according to the receiver configuration instruction, where the first type identifier indicates that the sound effect receiver is an event trigger, the second type identifier indicates that the sound effect receiver is the event trigger and at least one in-team member, the third type identifier indicates that the sound effect receiver is at least one in-team member, the fourth type identifier indicates that the sound effect receiver is at least one out-of-team member, and the fifth type identifier indicates that the sound effect receiver is at least one in-team member and at least one out-of-team member. In this way, the user can flexibly configure different identifiers for different role positionings, which improves the diversity and flexibility of the scheme.
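A sketch of the five receiver type identifiers as an enumeration; the numeric values for the second and fifth identifiers, and the extra "everyone" value used in the examples, are assumptions:

```python
# Sketch of the receiver type identifiers; numeric values are assumptions
# except where the examples above fix them (1, 3, 4 and 0).

from enum import IntEnum

class ReceiverType(IntEnum):
    EVERYONE = 0             # used in the examples for the shared sound effect
    EVENT_TRIGGER = 1        # first type: the event trigger itself
    TRIGGER_AND_TEAM = 2     # second type: trigger plus in-team members (assumed value)
    IN_TEAM_MEMBER = 3       # third type: in-team members
    OUT_OF_TEAM_MEMBER = 4   # fourth type: out-of-team members
    IN_AND_OUT_OF_TEAM = 5   # fifth type: in-team and out-of-team members (assumed value)
```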
Optionally, on the basis of the embodiment corresponding to fig. 9 or fig. 10, in another embodiment of the client 20 provided in the embodiment of the present invention,
the obtaining module 202 is specifically configured to obtain a first type identifier corresponding to the sound effect receiving party if the sound effect receiving party is an event trigger party;
acquiring a first sound effect file path of a target sound effect file according to the target sound effect identification and the first type identification;
the playing module 203 is specifically configured to play the sound effect corresponding to the target sound effect file according to the first sound effect file path acquired by the acquiring module 202.
Further, in the embodiment of the present invention, if the sound effect receiving party is the event trigger party, the first type identifier corresponding to the sound effect receiving party is obtained, then the first sound effect file path of the target sound effect file is obtained according to the target sound effect identifier and the first type identifier, and finally, the client plays the sound effect corresponding to the target sound effect file according to the first sound effect file path. In this way, when the role controlled by the client is the event trigger, the sound effect configured for that role can be played, so that the flexibility and diversity of the scheme are improved.
Optionally, on the basis of the embodiment corresponding to fig. 9 or fig. 10, in another embodiment of the client 20 provided in the embodiment of the present invention,
the obtaining module 202 is specifically configured to obtain a second type identifier corresponding to the sound effect receiver if the sound effect receiver is a member in a team;
acquiring a second sound effect file path of the target sound effect file according to the target sound effect identification and the second type identification;
the playing module is specifically configured to play the sound effect corresponding to the target sound effect file according to the second sound effect file path acquired by the acquisition module.
Further, in the embodiment of the present invention, if the sound effect receiving party is an in-team member, the second type identifier corresponding to the sound effect receiving party is obtained, then the second sound effect file path of the target sound effect file is obtained according to the target sound effect identifier and the second type identifier, and finally, the client plays the sound effect corresponding to the target sound effect file according to the second sound effect file path. In this way, when the role controlled by the client is an in-team member, the sound effect configured for that role can be played, so that the flexibility and diversity of the scheme are improved.
Optionally, on the basis of the embodiment corresponding to fig. 9 or fig. 10, in another embodiment of the client 20 provided in the embodiment of the present invention,
the obtaining module 202 is specifically configured to obtain a third type identifier corresponding to the sound effect receiving party if the sound effect receiving party is an extrateam member;
acquiring a third sound effect file path of the target sound effect file according to the target sound effect identification and the third type identification;
the playing module is specifically configured to play the sound effect corresponding to the target sound effect file according to the third sound effect file path acquired by the acquisition module.
Further, in the embodiment of the present invention, if the sound effect receiving party is an out-of-team member, the third type identifier corresponding to the sound effect receiving party is obtained, then the third sound effect file path of the target sound effect file is obtained according to the target sound effect identifier and the third type identifier, and finally, the client plays the sound effect corresponding to the target sound effect file according to the third sound effect file path. In this way, when the role controlled by the client is an out-of-team member, the sound effect configured for that role can be played, so that the flexibility and diversity of the scheme are improved.
Optionally, on the basis of the embodiment corresponding to fig. 9, please refer to fig. 11, in another embodiment of the client 20 provided in the embodiment of the present invention, the client 20 further includes a rendering module 205;
the determining module 201 is further configured to determine, when a target event type of an audio event is detected, a target video identifier corresponding to the target event type from a video identifier set according to the target event type, where the video identifier set includes at least one video identifier, and a one-to-one correspondence relationship exists between the video identifier and the event type;
the obtaining module 202 is further configured to obtain a target type identifier of a video receiving party, where the target type identifier is used to represent a relationship of the sound effect receiving party in a team;
the obtaining module 202 is further configured to obtain a target video file path of a target video file according to the target video identifier determined by the determining module 201 and the target type identifier obtained by the obtaining module 202, where the target video file path is used to indicate a position of the target video file;
the rendering module 205 is configured to render an animation effect corresponding to the target video file according to the target video file path obtained by the obtaining module 202.
Secondly, in the embodiment of the present invention, when the target event type is detected, the client may further determine a target video identifier corresponding to the target event type from the video identifier set according to the target event type, then the client obtains the target type identifier of the video receiver, next, the client obtains a target video file path of the target video file according to the target video identifier and the target type identifier, and finally, renders an animation effect corresponding to the target video file according to the target video file path. Through this method, when rendering the video, the client can find the small event set from the large event set according to the event type, directly find the path of the video file from the small event set according to the type of the video receiver, and render the video file through the path. Meanwhile, in an actual scene, the client can not only play different sound effects according to different role positioning, but also display different animations, further improving the diversity of the scheme.
As shown in fig. 12, for convenience of description, only the parts related to the embodiment of the present invention are shown, and details of the specific technology are not disclosed; please refer to the method part in the embodiment of the present invention. The terminal device may be any terminal including a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), a Point of Sale (POS) terminal, a vehicle-mounted computer, and the like. The following description takes the terminal device being a mobile phone as an example:
fig. 12 is a block diagram showing a partial structure of a cellular phone related to a terminal device provided in an embodiment of the present invention. Referring to fig. 12, the cellular phone includes: radio Frequency (RF) circuit 310, memory 320, input unit 330, display unit 340, sensor 350, audio circuit 360, wireless fidelity (WiFi) module 370, processor 380, and power supply 390. Those skilled in the art will appreciate that the handset configuration shown in fig. 12 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile phone in detail with reference to fig. 12:
the RF circuit 310 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, receives downlink information of a base station and then processes the received downlink information to the processor 380; in addition, the data for designing uplink is transmitted to the base station. In general, the RF circuit 310 includes, but is not limited to, an antenna, at least one Amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, RF circuit 310 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The memory 320 may be used to store software programs and modules, and the processor 380 executes various functional applications and data processing of the mobile phone by operating the software programs and modules stored in the memory 320. The memory 320 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 320 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The input unit 330 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone. Specifically, the input unit 330 may include a touch panel 331 and other input devices 332. The touch panel 331, also referred to as a touch screen, can collect touch operations of a user (e.g., operations of the user on the touch panel 331 or near the touch panel 331 using any suitable object or accessory such as a finger, a stylus, etc.) on or near the touch panel 331, and drive the corresponding connection device according to a preset program. Alternatively, the touch panel 331 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 380, and can receive and execute commands sent by the processor 380. In addition, the touch panel 331 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The input unit 330 may include other input devices 332 in addition to the touch panel 331. In particular, other input devices 332 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 340 may be used to display information input by the user or information provided to the user and various menus of the mobile phone. The Display unit 340 may include a Display panel 341, and optionally, the Display panel 341 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 331 can cover the display panel 341, and when the touch panel 331 detects a touch operation on or near the touch panel 331, the touch panel is transmitted to the processor 380 to determine the type of the touch event, and then the processor 380 provides a corresponding visual output on the display panel 341 according to the type of the touch event. Although in fig. 12, the touch panel 331 and the display panel 341 are two independent components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 331 and the display panel 341 may be integrated to implement the input and output functions of the mobile phone.
The handset may also include at least one sensor 350, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel 341 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 341 and/or the backlight when the mobile phone is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The audio circuit 360, the speaker 361, and the microphone 362 may provide an audio interface between the user and the handset. The audio circuit 360 may transmit the electrical signal converted from the received audio data to the speaker 361, and the speaker 361 converts it into a sound signal for output; on the other hand, the microphone 362 converts collected sound signals into electrical signals, which are received by the audio circuit 360 and converted into audio data; the audio data is then output to the processor 380 for processing and transmitted via the RF circuit 310 to, for example, another mobile phone, or output to the memory 320 for further processing.
WiFi belongs to short-distance wireless transmission technology, and the mobile phone can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 370, and provides wireless broadband internet access for the user. Although fig. 12 shows the WiFi module 370, it is understood that it does not belong to the essential constitution of the handset, and may be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 380 is a control center of the mobile phone, connects various parts of the whole mobile phone by using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 320 and calling data stored in the memory 320, thereby performing overall monitoring of the mobile phone. Optionally, processor 380 may include one or more processing units; optionally, processor 380 may integrate an application processor, which primarily handles operating systems, user interfaces, application programs, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 380.
The handset also includes a power supply 390 (e.g., a battery) for powering the various components. Optionally, the power supply may be logically connected to the processor 380 through a power management system, so that charging, discharging, and power consumption are managed through the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which are not described herein.
In this embodiment of the present invention, the processor 380 included in the terminal device further has the following functions:
when a target event type of an audio event is detected, determining a target sound effect identification corresponding to the target event type from a sound effect identification set according to the target event type, wherein the sound effect identification set comprises at least one sound effect identification, and the sound effect identification and the event type have a one-to-one correspondence relationship;
acquiring a target type identifier of a sound effect receiver, wherein the target type identifier belongs to one type identifier in a type identifier set, and the type identifier is used for representing role types corresponding to different receivers;
acquiring a path of a target sound effect file according to the target sound effect identification and the target type identification, wherein the path is used for indicating an index of the sound effect file, and the path and the type identification have a corresponding relation;
and playing the sound effect corresponding to the target sound effect file according to the path.
Optionally, the processor 380 is further configured to perform the following steps:
acquiring a sound effect configuration relation, wherein the sound effect configuration relation comprises a corresponding relation among a sound effect identification set, an event type set, a type identification set and a file path set, the sound effect identification set comprises a target sound effect identification, the event type set comprises a target event type, the type identification set comprises a target type identification, and the file path set comprises a path of a target sound effect file.
Optionally, the processor 380 is specifically configured to perform the following steps:
receiving an event type configuration instruction;
and generating the event type set according to the event type configuration instruction, wherein the event type set comprises an interaction event type and a stand-alone event type, the interaction event type represents an event type carried out between at least two roles, and the stand-alone event type represents an event type executed by one role.
Optionally, the processor 380 is specifically configured to perform the following steps:
receiving a file path configuration instruction;
generating the type identifier set according to the file path configuration instruction, wherein the type identifier set includes M file paths, M is an integer greater than or equal to 1, and the M file paths are stored in at least one storage module.
Optionally, the processor 380 is specifically configured to perform the following steps:
receiving a configuration instruction of a receiver;
generating the type identifier set according to the receiver configuration instruction, wherein the type identifier set comprises at least one of a first type identifier, a second type identifier, a third type identifier, a fourth type identifier and a fifth type identifier;
wherein the first type identification indicates that the sound effect receiving party is an event trigger;
the second type identification indicates that the sound effect receiving party is the event trigger and at least one member in the team, wherein the event trigger and the at least one member in the team both belong to a first team;
the third type identification indicates that the sound effect receiving party is the at least one member in the team;
the fourth type identification indicates that the sound effect receiving party is at least one member outside the team, the at least one member outside the team belongs to a second team, and the second team is a different team from the first team;
the fifth type identification indicates that the sound effect receiving party is the at least one in-team member and the at least one out-of-team member.
Optionally, the processor 380 is specifically configured to perform the following steps:
if the sound effect receiving party is an event trigger party, acquiring a first type identification corresponding to the sound effect receiving party;
acquiring a first sound effect file path of a target sound effect file according to the target sound effect identification and the first type identification;
and playing the sound effect corresponding to the target sound effect file according to the first sound effect file path.
Optionally, the processor 380 is specifically configured to perform the following steps:
if the sound effect receiving party is a member in the team, acquiring a second type identification corresponding to the sound effect receiving party;
acquiring a second sound effect file path of the target sound effect file according to the target sound effect identification and the second type identification;
and playing the sound effect corresponding to the target sound effect file according to the second sound effect file path.
Optionally, the processor 380 is specifically configured to perform the following steps:
if the sound effect receiving party is an out-of-team member, acquiring a third type identifier corresponding to the sound effect receiving party;
acquiring a third sound effect file path of the target sound effect file according to the target sound effect identification and the third type identification;
and playing the sound effect corresponding to the target sound effect file according to the third sound effect file path.
Optionally, the processor 380 is further configured to perform the following steps:
determining a target video identifier corresponding to the target event type from a video identifier set according to the target event type, wherein the video identifier set comprises at least one video identifier, and the video identifier and the event type have a one-to-one correspondence relationship;
acquiring a target type identifier of a video receiver, wherein the target type identifier is used for representing the relation of the sound effect receiver in a team;
acquiring a target video file path of a target video file according to the target video identifier and the target type identifier, wherein the target video file path is used for indicating the position of the target video file;
and rendering the animation effect corresponding to the target video file according to the target video file path.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (12)

1. A method for playing sound effect is characterized by comprising the following steps:
when a target event type of an audio event is detected, determining a target sound effect identification corresponding to the target event type from a sound effect identification set according to the target event type, wherein the sound effect identification set comprises at least one sound effect identification, and the sound effect identification and the event type have a one-to-one correspondence relationship;
determining an event trigger according to the target event type, and determining the relationship among all roles in different teams and the relationship among different teams according to the event trigger; acquiring a target type identifier of a sound effect receiver according to the relationship between the roles in different teams and the relationship between the different teams, wherein the target type identifier belongs to one type identifier in a type identifier set, and the type identifier is used for representing the role types corresponding to the different receivers;
acquiring a path of a target sound effect file according to the target sound effect identification and the target type identification, wherein the path is used for indicating an index of the target sound effect file, and the path and the type identification have a corresponding relation;
and playing the sound effect corresponding to the target sound effect file according to the path of the target sound effect file.
2. The method according to claim 1, wherein before determining a target sound effect identification corresponding to the target event type from a set of sound effect identifications according to the target event type, the method further comprises:
acquiring a sound effect configuration relation, wherein the sound effect configuration relation comprises a corresponding relation among a sound effect identification set, an event type set, a type identification set and a file path set, the sound effect identification set comprises a target sound effect identification, the event type set comprises a target event type, the type identification set comprises a target type identification, and the file path set comprises a path of a target sound effect file.
3. The method of claim 2, wherein the obtaining the sound effect configuration relationship comprises:
receiving an event type configuration instruction;
and generating the event type set according to the event type configuration instruction, wherein the event type set comprises an interaction event type and a stand-alone event type, the interaction event type represents an event type carried out between at least two roles, and the stand-alone event type represents an event type executed by one role.
4. The method of claim 2, wherein the obtaining the sound effect configuration relationship comprises:
receiving a file path configuration instruction;
generating the file path set according to the file path configuration instruction, wherein the file path set includes M file paths, M is an integer greater than or equal to 1, and the M file paths are stored in at least one storage module.
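The sound effect configuration relationship of claims 2 to 4 can be pictured as a single table tying the four sets together. The sketch below is a hypothetical illustration of such a configuration, with event types split into interaction and stand-alone kinds and file paths allowed to sit in different storage modules; none of the names are taken from the patent.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Dict


class EventKind(Enum):
    INTERACTION = auto()   # event carried out between at least two roles
    STAND_ALONE = auto()   # event executed by a single role


@dataclass
class SoundEffectConfig:
    event_type: str                          # member of the event type set
    event_kind: EventKind
    sound_effect_id: int                     # member of the sound effect identification set
    # type identification of the receiver -> file path (paths may live in different storage modules)
    paths_by_receiver_type: Dict[str, str]


SOUND_EFFECT_CONFIG = [
    SoundEffectConfig("FIRST_BLOOD", EventKind.INTERACTION, 101,
                      {"TRIGGER": "audio/module_a/first_blood_self.bank",
                       "ENEMY":   "audio/module_b/first_blood_enemy.bank"}),
    SoundEffectConfig("LEVEL_UP", EventKind.STAND_ALONE, 103,
                      {"TRIGGER": "audio/module_a/level_up.bank"}),
]
```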
5. The method of claim 2, wherein the obtaining the sound effect configuration relationship comprises:
receiving a configuration instruction of a receiver;
generating the type identifier set according to the receiver configuration instruction, wherein the type identifier set comprises at least one of a first type identifier, a second type identifier, a third type identifier, a fourth type identifier and a fifth type identifier;
wherein the first type identification indicates that the sound effect receiving party is an event trigger;
the second type identification indicates that the sound effect receiving party is the event trigger and at least one member in the team, wherein the event trigger and the at least one member in the team both belong to a first team;
the third type identification indicates that the sound effect receiving party is the at least one member in the team;
the fourth type identification indicates that the sound effect receiving party is at least one member outside the team, the at least one member outside the team belongs to a second team, and the second team is a different team from the first team;
the fifth type identification indicates that the sound effect receiving party is the at least one in-team member and the at least one out-of-team member.
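The five receiver type identifications enumerated in claim 5 map naturally onto a small enumeration. The following naming is illustrative only and is not taken from the patent:

```python
from enum import Enum


class ReceiverTypeId(Enum):
    TRIGGER = 1               # first type: the event trigger itself
    TRIGGER_AND_TEAM = 2      # second type: the trigger plus at least one member of its (first) team
    IN_TEAM = 3               # third type: at least one in-team member
    OUT_OF_TEAM = 4           # fourth type: at least one member of a different (second) team
    IN_AND_OUT_OF_TEAM = 5    # fifth type: in-team members together with out-of-team members
```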
6. The method according to any one of claims 1 to 5, wherein the obtaining of the target type identification of the sound effect receiving party comprises:
if the sound effect receiving party is an event trigger party, acquiring a first type identification corresponding to the sound effect receiving party;
the obtaining of the path of the target sound effect file according to the target sound effect identification and the target type identification comprises the following steps:
acquiring a first sound effect file path of a target sound effect file according to the target sound effect identification and the first type identification;
the playing the sound effect corresponding to the target sound effect file according to the path comprises the following steps:
and playing the sound effect corresponding to the target sound effect file according to the first sound effect file path.
7. The method according to any one of claims 1 to 5, wherein the obtaining of the target type identification of the sound effect receiving party comprises:
if the sound effect receiving party is a member in the team, acquiring a second type identification corresponding to the sound effect receiving party;
the obtaining of the path of the target sound effect file according to the target sound effect identification and the target type identification comprises the following steps:
acquiring a second sound effect file path of the target sound effect file according to the target sound effect identification and the second type identification;
the playing the sound effect corresponding to the target sound effect file according to the path comprises the following steps:
and playing the sound effect corresponding to the target sound effect file according to the second sound effect file path.
8. The method according to any one of claims 1 to 5, wherein the obtaining of the target type identification of the sound effect receiving party comprises:
if the sound effect receiving party is an out-of-team member, acquiring a third type identifier corresponding to the sound effect receiving party;
the obtaining of the path of the target sound effect file according to the target sound effect identification and the target type identification comprises the following steps:
acquiring a third sound effect file path of the target sound effect file according to the target sound effect identification and the third type identification;
the playing the sound effect corresponding to the target sound effect file according to the path comprises the following steps:
and playing the sound effect corresponding to the target sound effect file according to the third sound effect file path.
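Claims 6 to 8 specialize the same lookup by the receiver's relation to the event trigger. A hypothetical dispatch for these three cases might read as follows; the relation strings and type identifications are invented for illustration.

```python
def resolve_type_identification(relation_to_trigger: str) -> str:
    """Map the receiver's relation to the event trigger onto a type identification
    (claims 6-8); the file path is then indexed by this value together with the
    target sound effect identification, as in the generic lookup sketched above."""
    if relation_to_trigger == "trigger":
        return "FIRST_TYPE"       # claim 6: the receiver is the event trigger party
    if relation_to_trigger == "in_team":
        return "SECOND_TYPE"      # claim 7: the receiver is an in-team member
    if relation_to_trigger == "out_of_team":
        return "THIRD_TYPE"       # claim 8: the receiver is an out-of-team member
    raise ValueError(f"unknown relation: {relation_to_trigger}")
```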
9. The method of claim 1, wherein when a target event type of an audio event is detected, the method further comprises:
determining a target video identifier corresponding to the target event type from a video identifier set according to the target event type, wherein the video identifier set comprises at least one video identifier, and the video identifier and the event type have a one-to-one correspondence relationship;
acquiring a target type identifier of a video receiver, wherein the target type identifier is used for representing the relationship of the video receiver within a team;
acquiring a target video file path of a target video file according to the target video identifier and the target type identifier, wherein the target video file path is used for indicating the position of the target video file;
and rendering the animation effect corresponding to the target video file according to the target video file path.
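Claim 9 mirrors the audio lookup for animation effects. A hypothetical parallel sketch, with names and paths invented for illustration, is:

```python
VIDEO_ID_BY_EVENT = {"FIRST_BLOOD": 201}                        # one-to-one: event type -> video identifier
VIDEO_PATH = {(201, "TRIGGER"): "video/first_blood_self.anim",  # indexed by (video identifier, type identifier)
              (201, "ENEMY"):   "video/first_blood_enemy.anim"}


def render_animation(path: str) -> None:
    """Stand-in for the client's actual rendering call."""
    print(f"rendering {path}")


def on_video_event(event_type: str, receiver_type_id: str) -> None:
    video_id = VIDEO_ID_BY_EVENT[event_type]         # determine the target video identifier
    path = VIDEO_PATH[(video_id, receiver_type_id)]  # the target video file path indicates the file's location
    render_animation(path)                           # render the corresponding animation effect
```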
10. A client, comprising:
a determining module, configured to determine, when a target event type of an audio event is detected, a target sound effect identification corresponding to the target event type from a sound effect identification set according to the target event type, wherein the sound effect identification set comprises at least one sound effect identification, and the sound effect identification and the event type have a one-to-one correspondence relationship;
an obtaining module, configured to determine an event trigger according to the target event type, and determine, according to the event trigger, the relationships among the roles within different teams and the relationships between the different teams; and acquire a target type identifier of a sound effect receiver according to the relationships among the roles within the different teams and the relationships between the different teams, wherein the target type identifier belongs to one type identifier in a type identifier set, and the type identifier is used for representing the role types corresponding to different receivers;
wherein the obtaining module is further configured to obtain a path of a target sound effect file according to the target sound effect identification determined by the determining module and the target type identifier obtained by the obtaining module, wherein the path is used for indicating an index of the target sound effect file, and the path and the type identifier have a correspondence relationship;
and a playing module, configured to play the sound effect corresponding to the target sound effect file according to the path obtained by the obtaining module.
11. A terminal device, comprising: a memory, a transceiver, a processor, and a bus system;
wherein the memory is used for storing programs;
the processor is configured to execute the program in the memory, so as to perform the following steps:
when a target event type of an audio event is detected, determining a target sound effect identification corresponding to the target event type from a sound effect identification set according to the target event type, wherein the sound effect identification set comprises at least one sound effect identification, and the sound effect identification and the event type have a one-to-one correspondence relationship;
determining an event trigger according to the target event type, and determining, according to the event trigger, the relationships among the roles within different teams and the relationships between the different teams; acquiring a target type identifier of a sound effect receiver according to the relationships among the roles within the different teams and the relationships between the different teams, wherein the target type identifier belongs to one type identifier in a type identifier set, and the type identifier is used for representing the role types corresponding to different receivers;
acquiring a path of a target sound effect file according to the target sound effect identification and the target type identification, wherein the path is used for indicating an index of the sound effect file, and the path and the type identification have a corresponding relation;
playing the sound effect corresponding to the target sound effect file according to the path of the target sound effect file;
the bus system is used for connecting the memory and the processor so as to enable the memory and the processor to communicate.
12. A computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform the method of any of claims 1 to 9.
CN201910044352.XA 2019-01-17 2019-01-17 Sound effect playing method and related device Active CN109857363B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910044352.XA CN109857363B (en) 2019-01-17 2019-01-17 Sound effect playing method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910044352.XA CN109857363B (en) 2019-01-17 2019-01-17 Sound effect playing method and related device

Publications (2)

Publication Number Publication Date
CN109857363A (en) 2019-06-07
CN109857363B (en) 2021-10-22

Family

ID=66895103

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910044352.XA Active CN109857363B (en) 2019-01-17 2019-01-17 Sound effect playing method and related device

Country Status (1)

Country Link
CN (1) CN109857363B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112044070B (en) * 2020-09-04 2022-05-24 腾讯科技(深圳)有限公司 Virtual unit display method, device, terminal and storage medium
WO2023075706A2 (en) * 2021-11-01 2023-05-04 Garena Online Private Limited Method of using scriptable objects to insert audio features into a program

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104841131A (en) * 2014-02-18 2015-08-19 腾讯科技(深圳)有限公司 Audio frequency control method and apparatus
CN106371797A (en) * 2016-08-31 2017-02-01 腾讯科技(深圳)有限公司 Method and device for configuring sound effect
CN107115672A (en) * 2016-02-24 2017-09-01 网易(杭州)网络有限公司 Gaming audio resource player method, device and games system
CN108888957A (en) * 2018-07-24 2018-11-27 合肥爱玩动漫有限公司 A method of by the customized game music of player and audio
CN109173259A (en) * 2018-07-17 2019-01-11 派视觉虚拟现实(深圳)软件技术有限公司 Audio optimization method, device and equipment in a kind of game

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7085722B2 (en) * 2001-05-14 2006-08-01 Sony Computer Entertainment America Inc. System and method for menu-driven voice control of characters in a game environment

Also Published As

Publication number Publication date
CN109857363A (en) 2019-06-07

Similar Documents

Publication Publication Date Title
CN111773696B (en) Virtual object display method, related device and storage medium
CN111686447B (en) Method and related device for processing data in virtual scene
JP2022525172A (en) Virtual object control methods, devices, computer equipment and programs
CN113101652A (en) Information display method and device, computer equipment and storage medium
CN109550244A (en) A kind of method and relevant apparatus of role state switching
WO2022142622A1 (en) Method and apparatus for selecting virtual object interaction mode, device, medium, and product
CN109857363B (en) Sound effect playing method and related device
CN109173250B (en) Multi-role control method, computer storage medium and terminal
WO2021143253A1 (en) Method and apparatus for operating virtual prop in virtual environment, device, and readable medium
CN107754316B (en) Information exchange processing method and mobile terminal
CN109758766B (en) Role state synchronization method and related device
CN109529335B (en) Game role sound effect processing method and device, mobile terminal and storage medium
CN109718552B (en) Life value control method based on simulation object and client
CN114225412A (en) Information processing method, information processing device, computer equipment and storage medium
CN113633985A (en) Using method of virtual accessory, related device, equipment and storage medium
CN112316423A (en) Method, device, equipment and medium for displaying state change of virtual object
CN112057862B (en) Interaction method and related device
CN111760283B (en) Skill distribution method and device for virtual object, terminal and readable storage medium
CN116099199A (en) Game skill processing method, game skill processing device, computer equipment and storage medium
CN117643723A (en) Game interaction method, game interaction device, computer equipment and computer readable storage medium
CN116328301A (en) Information prompting method, device, computer equipment and storage medium
CN114042322A (en) Animation display method and device, computer equipment and storage medium
CN115869623A (en) Virtual weapon processing method and device, computer equipment and storage medium
CN117482523A (en) Game interaction method, game interaction device, computer equipment and computer readable storage medium
CN115317893A (en) Virtual resource processing method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant