CN114887327A - Sound effect playing control method and device and electronic equipment - Google Patents

Sound effect playing control method and device and electronic equipment Download PDF

Info

Publication number
CN114887327A
CN114887327A (Application CN202210393780.5A)
Authority
CN
China
Prior art keywords
sound effect
target
target object
sound
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210393780.5A
Other languages
Chinese (zh)
Inventor
杨子强
李喆超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202210393780.5A priority Critical patent/CN114887327A/en
Publication of CN114887327A publication Critical patent/CN114887327A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/54 Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/822 Strategy games; Role-playing games
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/6063 Methods for processing data by generating or executing the game program for sound processing
    • A63F2300/6081 Methods for processing data by generating or executing the game program for sound processing generating an output signal, e.g. under timing constraints, for spatialization
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/807 Role playing or strategy games

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Stereophonic System (AREA)

Abstract

The invention provides a sound effect playback control method and apparatus, and an electronic device. In response to a sound effect trigger event, a target object that triggers the sound effect and a target sound effect to be played are determined; the visibility state of the target object is acquired, the visibility state indicating whether the target object can be seen by the controlled virtual object or by other virtual objects; a playing mode of the target sound effect is determined, and the target sound effect is played, based on the visibility state of the target object and the position in the game scene of the virtual camera corresponding to the controlled virtual object; and in response to a change in the visibility state of the target object, the playing mode of the target sound effect is updated and the target sound effect continues to be played in the updated playing mode. In the above manner, the sound effect playing mode is determined based on the visibility state of the target object and is adjusted when the visibility changes, so that the current player can hear the sound effects of visible objects, thereby learning the battle situation across the whole game scene and improving the user experience.

Description

Sound effect playing control method and device and electronic equipment
Technical Field
The invention relates to the technical field of online games, and in particular to a sound effect playback control method and apparatus, and an electronic device.
Background
In a game scene, in order to improve the player's experience, some actions of a game object are accompanied by sound effects; for example, when a game object starts to walk, a walking sound effect can be triggered, and when a game object casts a skill, the sound effect corresponding to that skill can be triggered.
In the related art, in a multiplayer online tactical competitive game scene, when a sound effect is triggered it is judged whether the game object is within the game interface display range corresponding to the current player character; if it is, the sound effect is played for its entire duration, and if it is not, the sound effect is not played at all. However, besides the game interface display range corresponding to the current player character, a game scene usually also contains areas outside that range, and with this playing manner the player cannot judge, from sound, the situation on a distant battlefield outside the display range, so the user experience is poor. In addition, the display range of the game interface may contain areas in which game objects can hide, and with the above manner a sound effect may keep playing even though the game object's character is hidden and invisible, which also leads to a poor user experience.
Disclosure of Invention
In view of this, the present invention provides a sound effect playback control method and apparatus, and an electronic device, so that the player can hear the sound effects of visible objects and obtain the battle situation of the whole game scene, thereby improving the user experience.
In a first aspect, an embodiment of the present invention provides a sound effect playback control method, in which a terminal device provides a graphical user interface, the graphical user interface includes a scene picture of part of a game scene, and the scene picture is captured in the game scene by a virtual camera corresponding to a controlled virtual object; the method comprises the following steps: in response to a sound effect trigger event, determining a target object that triggers a sound effect and a target sound effect to be played; acquiring the visibility state of the target object, wherein the visibility state is used for indicating whether the target object can be seen by the controlled virtual object or by virtual objects other than the controlled virtual object; determining a playing mode of the target sound effect, and playing the target sound effect, based on the visibility state of the target object and the position in the game scene of the virtual camera corresponding to the controlled virtual object; and in response to a change in the visibility state of the target object, updating the playing mode of the target sound effect and continuing to play the target sound effect based on the updated playing mode.
The game scene comprises a hidden area; visibility states include visible or invisible; the step of obtaining the visibility state of the target object comprises the following steps: acquiring the position of a target object in a game scene; judging whether the target object is located in the hidden area or not based on the position of the target object; if the target object is located in the hidden area, determining the visibility state of the target object to be invisible; and if the target object is not located in the hidden area, determining the visibility state of the target object to be visible.
The step of determining the playing mode of the target sound effect based on the visibility state of the target object and the position of the virtual camera corresponding to the controlled virtual object in the game scene includes: determining the sound source position of the target object based on the visibility state of the target object; and determining the playing mode of the target sound effect based on the sound source position of the target object and the position of the virtual camera corresponding to the controlled virtual object in the game scene.
The visibility state includes visible or invisible; the step of determining the sound source position corresponding to the target object based on the visibility state of the target object comprises the following steps: if the visibility state of the target object is visible, determining the position of the target object in the game scene as the sound source position corresponding to the target object; if the visibility state of the target object is invisible, determining a preset position in the game scene as a sound source position corresponding to the target object; the distance between the preset position and the position of the virtual camera corresponding to the controlled virtual object in the game scene is larger than a preset distance threshold value.
The step of determining the playing mode of the target sound effect based on the sound source position of the target object and the position of the virtual camera corresponding to the controlled virtual object in the game scene comprises the following steps: determining a volume parameter of a target sound effect based on the sound source position of the target object and the position of a virtual camera corresponding to the controlled virtual object in a game scene; the volume parameter represents the influence of the distance between the sound source position and the virtual object on the volume of the target sound effect; and determining the playing mode of the target sound effect based on the volume parameter of the target sound effect.
The volume parameter comprises an attenuation coefficient; the step of determining the volume parameter of the target sound effect based on the sound source position of the target object and the position of the virtual camera corresponding to the controlled virtual object in the game scene comprises: calculating the distance between the sound source position and the position of the virtual camera corresponding to the controlled virtual object in the game scene; and determining the attenuation coefficient of the target sound effect based on a predetermined attenuation curve of the target sound effect and the distance between the sound source position and the listener position; the attenuation curve indicates the trend of the volume of the target sound effect with the distance between the sound source position of the target sound effect and the listener position.
The step of determining the playing mode of the target sound effect based on the volume parameter of the target sound effect comprises: determining the volume of the target sound effect based on the volume parameter; judging whether the volume of the target sound effect is greater than or equal to a preset volume threshold; if not, determining the playing mode of the target sound effect as: playing the target sound effect through a virtual sound part of a preset sound engine; if yes, determining the playing mode of the target sound effect as: playing the target sound effect through a real sound part of the preset sound engine.
The target sound effects comprise a plurality of sound effects; before the step of determining the playing mode of a target sound effect as playing the target sound effect through the real sound part of the preset sound engine, the method further comprises: judging whether the number of target sound effects is less than or equal to a preset sound number threshold; if not, determining the playing mode of the target sound effect with the lowest preset priority among the target sound effects as: playing that target sound effect through the virtual sound part of the preset sound engine; and continuing to perform the step of judging whether the number of target sound effects is less than or equal to the preset sound number threshold until the number of target sound effects is equal to the sound number threshold.
In a second aspect, an embodiment of the present invention provides a sound effect playback control apparatus, in which a terminal device provides a graphical user interface, the graphical user interface includes a scene picture of part of a game scene, and the scene picture is captured in the game scene by a virtual camera corresponding to a controlled virtual object; the apparatus includes: a target sound effect determining module, configured to determine, in response to a sound effect trigger event, a target object that triggers a sound effect and a target sound effect to be played; a visibility state acquiring module, configured to acquire the visibility state of the target object, wherein the visibility state is used for indicating whether the target object can be seen by the controlled virtual object or by virtual objects other than the controlled virtual object; a playing mode determining module, configured to determine a playing mode of the target sound effect, and play the target sound effect, based on the visibility state of the target object and the position of the virtual camera corresponding to the controlled virtual object in the game scene; and an updating module, configured to update the playing mode of the target sound effect in response to a change in the visibility state of the target object, and continue to play the target sound effect based on the updated playing mode.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a processor and a memory, where the memory stores machine executable instructions capable of being executed by the processor, and the processor executes the machine executable instructions to implement the method for controlling playback of sound effects described above.
In a fourth aspect, embodiments of the present invention provide a machine-readable storage medium storing machine-executable instructions, which when invoked and executed by a processor, cause the processor to implement the method for controlling playback of sound effects described above.
The embodiment of the invention brings the following beneficial effects:
the method, the device and the electronic equipment for controlling the playing of the sound effect respond to a sound effect trigger event, and determine a target object for triggering the sound effect and a target sound effect to be played; acquiring the visibility state of the target object; the visibility state is used to indicate whether the target object can be seen by the controlled virtual object or other virtual objects; determining a playing mode of a target sound effect and playing the target sound effect based on the visibility state of the target object and the position of a virtual camera corresponding to the controlled virtual object in a game scene; and responding to the change of the visibility state of the target object, updating the playing mode of the target sound effect, and continuing to play the target sound effect based on the updated playing mode. In the above manner, the sound effect playing mode is determined based on the visibility state of the target object, and when the visibility changes, the sound effect playing mode is adjusted, so that the player controlling the virtual object can hear the sound effect of the visible object, thereby obtaining the game situation in the whole game scene and improving the user experience.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of a method for controlling sound effect playing according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an attenuation curve provided by an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a sound effect playing control apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Compared with 3D games, a Multiplayer Online Battle Arena (MOBA) game has a wider game scene, and its battlefield layout is generally a plane. MOBA games are highly competitive and enjoyable to watch. Audio, as an integral part of a game, can give players accurate sound localization and battlefield information. In order to enhance the players' competitive experience and let them judge battlefield information and situations more accurately from sound, the audio attenuation and audio visibility functions in such games generally need to be updated and upgraded.
To further understand the principle of audio playing in games, the following concepts need to be understood:
Listener: represents the position of the microphone in the game, so that 3D sounds can be oriented in the speakers to simulate a real 3D environment. There are generally two Listener modes in a game. One is the dual-Listener mode: while the player is alive, the Listener is bound to the character and faces a fixed upward direction, hears the sounds around the character's actual position, and is unrelated to the camera; when the player is dead, the Listener is bound to the camera lens, and audio is played according to the relative position of the sounding body and the lens. The other is the single-Listener mode: sound is always played according to the relative position of the sounding body and the lens, so the sound orientation is consistent between the alive and dead states.
Game Object: an entity to which elements such as interfaces, sounds and triggers can be attached. In audio middleware (such as Wwise), a sound can be played by sending an Event (a sound effect trigger event) to a game object registered with the engine, and a position can also be specified for that object in the game.
Virtual voice (virtual channel): a virtual environment in the sound engine in which sounds are managed and monitored by the sound engine but not actually processed. When the volume of a sound falls below a threshold, or the number of sounds exceeds a playback limit, the sound is allowed to enter the virtual sound part.
Attenuation curve: describes how the intensity of the sound signal varies with the distance between the sound emitter and the Listener.
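As a rough illustration of the dual-Listener mode described above, the following C++ sketch binds the Listener either to the character (alive, fixed orientation) or to the camera (dead); all type and function names here are assumptions for illustration rather than an actual engine API.
    // Rough sketch of the dual-Listener mode (illustrative names, not a real engine API).
    struct Vec3 { float x, y, z; };

    struct ListenerState {
        Vec3 position;   // where the "microphone" sits in the game world
        Vec3 forward;    // facing direction used for 3D panning
        Vec3 up;         // "up" direction used for 3D panning
    };

    // While the player is alive the Listener follows the controlled character with a
    // fixed upward orientation; after death it is bound to the camera lens instead.
    ListenerState UpdateListener(bool playerAlive,
                                 const Vec3& characterPos,
                                 const Vec3& cameraPos,
                                 const Vec3& cameraForward) {
        ListenerState listener{};
        if (playerAlive) {
            listener.position = characterPos;
            listener.forward  = Vec3{0.0f, 0.0f, 1.0f};  // fixed facing
            listener.up       = Vec3{0.0f, 1.0f, 0.0f};  // fixed "up"
        } else {
            listener.position = cameraPos;
            listener.forward  = cameraForward;
            listener.up       = Vec3{0.0f, 1.0f, 0.0f};
        }
        return listener;
    }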
In the related art, the playing logic of sound effects in a MOBA game is as follows: the visibility of the game object (game unit model or game special effect) carrying the sound effect is used as the basis for deciding how the corresponding sound effect is played. That is, the sound effect is played when the game object is visible and not played otherwise. This saves unnecessary consumption, but it also means that once a sound effect has started, it does not change when the visibility of the game object changes; moreover, a game object is only treated as visible when it appears in the game scene displayed on the screen, and is treated as invisible otherwise.
With this sound effect playing manner, while the player is alive, moving the camera does not let him hear the sounds of distant enemies or teammates, and the battlefield situation can only be judged visually, which detracts somewhat from the competitive experience. There is also a large difference in hearing and functionality when switching between the dead and alive states. In existing game engine settings, skills that are outside the screen but within vision do not trigger sound, so sound information from the distant battlefield cannot be received. For sounds that are out of vision or whose carriers are invisible, the lack of real-time visibility information for each game object means the sounds either cannot be stopped or are forced to play in step with what is seen. Based on this, the sound effect playback control method, apparatus and electronic device provided by the embodiments of the present invention can be applied to the playback control of sound effects in various game scenes.
The method for controlling the playing of the sound effect in one embodiment of the present disclosure may be executed in a local terminal device or a server. When the playing control method of the sound effect runs on the server, the method can be implemented and executed based on a cloud interaction system, wherein the cloud interaction system comprises the server and the client device.
In an optional embodiment, various cloud applications may be run under the cloud interaction system, for example cloud games. Taking a cloud game as an example, a cloud game refers to a game mode based on cloud computing. In the running mode of a cloud game, the body that runs the game program is separated from the body that presents the game picture: the storage and running of the sound effect playback control method are completed on the cloud game server, and the client device is used for receiving and sending data and presenting the game picture. For example, the client device may be a display device near the user side with a data transmission function, such as a mobile terminal, a television, a computer or a palmtop computer, while the cloud game server that performs the information processing is in the cloud. When playing, the player operates the client device to send operation instructions to the cloud game server; the cloud game server runs the game according to the operation instructions, encodes and compresses data such as game pictures, returns them to the client device through the network, and finally the client device decodes the data and outputs the game pictures.

In an optional implementation manner, taking a game as an example, the local terminal device stores the game program and is used for presenting the game picture. The local terminal device interacts with the player through a graphical user interface, that is, the game program is conventionally downloaded, installed and run on the electronic device. The local terminal device may provide the graphical user interface to the player in a variety of ways; for example, it may be rendered and displayed on the display screen of the terminal, or provided to the player through holographic projection. For example, the local terminal device may include a display screen for presenting the graphical user interface, which includes the game picture, and a processor for running the game, generating the graphical user interface, and controlling the display of the graphical user interface on the display screen.
In a possible implementation manner, an embodiment of the present invention provides a method for controlling playback of sound effects, where a graphical user interface is provided through a terminal device, where the terminal device may be the aforementioned local terminal device, and may also be the aforementioned client device in a cloud interaction system. The graphical user interface comprises scene pictures of part of game scenes, and the scene pictures are shot in the game scenes by virtual cameras corresponding to the controlled virtual objects; as shown in fig. 1, the method comprises the steps of:
step S102, responding to the sound effect triggering event, and determining a target object triggering the sound effect and a target sound effect to be played.
The target object may be a system virtual object in a game scene corresponding to a controlled virtual object operated and controlled by a current player, such as a non-player character (NPC), or may be a controlled virtual object operated and controlled by another player in the game scene. The target object may be a movable virtual object (may also be referred to as a "game object"), such as a hero character, a pet character, or the like, or a fixed virtual object, such as a defense tower.
The sound effect trigger event may include various events, for example the target object starts to walk, the target object casts a skill, or some action of another virtual object affects the state of the target object, such as the target character being attacked by an enemy and producing an injured sound. A target sound effect is preset for each sound effect trigger event, and the sound effects can be stored in advance on the terminal device in the form of audio.
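By way of illustration only, the mapping from trigger events to preset sound effects could be organized as a simple lookup table; the event names and asset paths below are invented examples, not assets from the actual game.
    #include <string>
    #include <unordered_map>

    // Illustrative lookup from trigger events to preset audio assets stored on the
    // terminal device. Event names and file paths are invented for this sketch.
    enum class SfxEvent { StartWalking, CastSkill, TakeDamage };

    const std::unordered_map<SfxEvent, std::string> kSfxTable = {
        {SfxEvent::StartWalking, "audio/footsteps_hero.wav"},
        {SfxEvent::CastSkill,    "audio/skill_cast.wav"},
        {SfxEvent::TakeDamage,   "audio/hurt_voice.wav"},
    };

    // Resolve the target sound effect for a trigger event raised by a target object.
    std::string ResolveTargetSfx(SfxEvent event) {
        auto it = kSfxTable.find(event);
        return it != kSfxTable.end() ? it->second : std::string{};
    }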
Step S104, acquiring the visibility state of the target object; wherein the visibility state is to: indicating whether the target object is viewable by the controlled virtual object or virtual objects other than the controlled virtual object.
Unlike the related art in which whether or not a target object can be seen is determined only by whether or not the target object is in a scene picture taken by a virtual camera corresponding to a controlled virtual object, the visibility state of the target object is whether or not the target object can be seen by a virtual object in the entire game scene. The visibility state of the target object may be visible or invisible. Typically, the target object is visible during the game, and other virtual objects may interact with or apply skills to the target object. When the target object enters a preset hidden area in the game scene, such as a grass or other areas which can be hidden, or the target object has a hidden attribute, such as a hidden skill is triggered or a hidden prop is used, the target object cannot be seen by other virtual objects in the game scene, and at the moment, the target object cannot be seen.
When the visibility state of the target object is obtained, if the game has stealth skill or stealth props, whether the target object has stealth property at the moment can be checked firstly, and if the target object has the stealth property, the target object is invisible; if the target object does not have the stealth property or does not have stealth skills or stealth props in the game, whether the position of the target object in the game scene is located in a hidden area in the game scene needs to be checked, and if the target object is located in the hidden area, the target object is considered to be invisible. In other cases, the target object is generally considered visible.
And S106, determining a playing mode of the target sound effect and playing the target sound effect based on the visibility state of the target object and the position of the virtual camera corresponding to the controlled virtual object in the game scene.
When the visibility state of the target object is visible, the target sound effect can be played normally; effects such as the volume of the target sound effect are mainly related to the distance between the target object and the controlled virtual object, or, when the Listener is bound to the virtual camera, to the distance between the target object and the virtual camera (see the attenuation behaviour of sound waves). When the visibility state of the target object is invisible, the target sound effect of the target object generally cannot be heard by the controlled virtual object, and it may be chosen not to play it. Likewise, even when the visibility state of the target object is visible, if the distance between the target object and the virtual camera is large, the volume heard at the controlled virtual object is small and it is difficult to obtain any useful information from it, so the target sound effect may also be chosen not to be played.
If the target object can move, its visibility state may change; if its target sound effect is simply not played, it is difficult to make the target sound effect resume at the correct time point when the target object changes from invisible to visible. In the related art, the sound engine of a game provides two sound playing modes, a virtual sound part (also referred to as a "virtual channel") and a real sound part. The real sound part performs full processing of a sound effect, including streaming, decoding, filtering and the like, so the sound is played normally; the virtual sound part only computes the volume of the sound effect and plays no actual sound, which is equivalent to a mute state. Therefore, the virtual sound part can be used to "play" a target sound effect when it should not actually be audible. When the target object is invisible, the sound source position of the target object can be set far away from the virtual camera, so that the attenuated volume of the target sound effect is very small, realizing a mute state for the target sound effect.
The virtual sound part is generally adopted to play a target sound effect with the volume smaller than a preset volume threshold value, and meanwhile, the virtual sound part is used for calculating the volume of the sound effect, so that when the volume of the target sound effect is higher than the volume threshold value, the target sound effect can be moved to the real sound part, and the real sound part is adopted to play the target sound effect.
And S108, responding to the change of the visibility state of the target object, updating the playing mode of the target sound effect, and continuing to play the target sound effect based on the updated playing mode.
In the playing process of the target sound effect, the visibility state of the target object can be acquired in real time, for example, whether the target object has a stealth attribute or not or whether the target object is in a hidden area of a game scene or not is determined, so that the visibility state of the target object is determined. And when the visibility state of the target object is changed, determining the playing mode of the target sound effect based on the changed visibility state. Specifically, when the target object changes from visible to invisible, the virtual sound part may be used to continue playing the target sound effect from the current time, and when the target object changes from invisible to visible, the real sound part may be used to continue playing the target sound effect from the current time.
The method for controlling the playing of the sound effect responds to a sound effect trigger event, and determines a target object for triggering the sound effect and a target sound effect to be played; acquiring the visibility state of the target object; the visibility state is used to indicate whether the target object can be seen by the controlled virtual object or other virtual objects; determining a playing mode of a target sound effect and playing the target sound effect based on the visibility state of the target object and the position of a virtual camera corresponding to the controlled virtual object in a game scene; and responding to the change of the visibility state of the target object, updating the playing mode of the target sound effect, and continuing to play the target sound effect based on the updated playing mode. In the above manner, the sound effect playing mode is determined based on the visibility state of the target object, and when the visibility changes, the sound effect playing mode is adjusted, so that the player controlling the virtual object can hear the sound effect of the visible object, thereby obtaining the game situation in the whole game scene and improving the user experience.
The embodiments described below provide an implementation to obtain a visibility state of a target object.
In order to increase the interest of the game, some hidden areas, such as grass, rivers, etc. which can hide in the game scene, are usually set; correspondingly, when the virtual object is in the hidden area, the visibility state of the virtual object is invisible, and when the virtual object is in the non-hidden area, the visibility state of the virtual object is visible.
When the visibility state of the target object is determined, the position of the target object in the game scene may be first acquired, and then whether the target object is located in the hidden area is determined based on the position of the target object, specifically, whether the position coordinate of the target object is included in the coordinate range of the hidden area may be determined; if the target object is located in the hidden area, determining the visibility state of the target object to be invisible; and if the target object is not located in the hidden area, determining the visibility state of the target object to be visible.
In addition, the target object may also cast a stealth skill or use a stealth item so that its visibility state becomes invisible. In some scenarios, the controlled virtual object may also cast a true-sight skill or the like that allows it to see an "invisible" target object; in that case the influence of these conditions on the visibility state of the target object needs to be considered comprehensively in order to determine the visibility state of the target object.
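A hedged C++ sketch of this visibility check is given below: stealth attributes and true-sight are tested first, then the target's position is tested against the coordinate ranges of the hidden areas. The types and field names are assumptions for illustration.
    #include <vector>

    struct Position { float x, y; };

    // Axis-aligned hidden area, e.g. a grass patch, described by its coordinate range.
    struct HiddenArea {
        float minX, minY, maxX, maxY;
        bool Contains(const Position& p) const {
            return p.x >= minX && p.x <= maxX && p.y >= minY && p.y <= maxY;
        }
    };

    enum class Visibility { Visible, Invisible };

    // Stealth attributes and true-sight are checked first, then the hidden areas.
    Visibility GetVisibility(const Position& targetPos,
                             bool hasStealthAttribute,   // stealth skill or item active
                             bool revealedByTrueSight,   // e.g. a "true sight" skill
                             const std::vector<HiddenArea>& hiddenAreas) {
        if (revealedByTrueSight) return Visibility::Visible;
        if (hasStealthAttribute) return Visibility::Invisible;
        for (const HiddenArea& area : hiddenAreas) {
            if (area.Contains(targetPos)) return Visibility::Invisible;
        }
        return Visibility::Visible;
    }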
The following embodiments provide an implementation of determining a playback mode of a target sound effect.
During sound propagation, sound waves attenuate with propagation distance, so a 3D sound effect in a game can change its loudness according to the distance between the sound source position and the Listener position: the farther away, the quieter. The distance between the position where the sound is generated (usually called the "sound source position") and the Listener (in this method, the position of the virtual camera in the game scene) is critical in determining the playback effect of the sound. Therefore, after the visibility state of the target object is obtained, the sound source position of the target object is determined based on the visibility state of the target object, and then the playing mode of the target sound effect is determined based on the sound source position of the target object and the position of the virtual camera corresponding to the controlled virtual object in the game scene.
Specifically, when the sound source position corresponding to the target object is determined, if the visibility state of the target object is visible, the position of the target object in the game scene is determined as the sound source position corresponding to the target object; if the visibility state of the target object is invisible, determining a preset position in a game scene as a sound source position corresponding to the target object; the distance between the preset position and the position of the virtual camera corresponding to the controlled virtual object in the game scene is larger than a preset distance threshold value. The distance threshold value can ensure that the attenuation of the target sound effect is large, so that the volume is small, the target sound effect is moved to the virtual sound part to be played, and the target sound effect of the invisible target object is not heard. When a target object in a game moves, the sound source position of the sound effect needs to be synchronously updated at the same time so as to keep consistency with the position of the target object, or when the target object is invisible, the sound source position is set to a preset position.
That is, the visibility of the game object carrying the sound effect is checked: if it is visible, its sound source position is set normally and the sound effect is audible as usual; if it is invisible, the sound source position is set to a position far away from the Listener and the sound effect is not audible. This can be implemented in code, for example as follows:
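A minimal C++ sketch of this logic is shown below; the SoundEngine interface, the helper names and the far-away coordinate are assumptions made only for illustration, not the original implementation.
    #include <cstdint>
    #include <unordered_map>

    struct Vec3 { float x, y, z; };

    // Stand-in for the audio middleware interface (an assumption for this sketch).
    struct SoundEngine {
        std::unordered_map<std::uint64_t, Vec3> positions;
        void SetObjectPosition(std::uint64_t gameObjectId, const Vec3& pos) {
            positions[gameObjectId] = pos;
        }
    };

    // A preset point whose distance from the virtual camera exceeds the distance
    // threshold, so the attenuated volume falls below the engine's threshold and
    // the sound is moved to the virtual sound part.
    const Vec3 kFarAwayPosition{1.0e6f, 1.0e6f, 0.0f};

    void UpdateSoundSourcePosition(std::uint64_t soundGameObjectId,
                                   bool targetVisible,
                                   const Vec3& targetPos,
                                   SoundEngine& engine) {
        if (targetVisible) {
            // Visible: use the target object's real position, so the sound effect is
            // heard normally and attenuated by the actual distance to the Listener.
            engine.SetObjectPosition(soundGameObjectId, targetPos);
        } else {
            // Invisible: park the sound source far away from the Listener, so the
            // target sound effect keeps "playing" silently in the virtual sound part.
            engine.SetObjectPosition(soundGameObjectId, kFarAwayPosition);
        }
    }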
after the sound source position of the target object is determined, the volume parameter of the target sound effect can be determined based on the sound source position of the target object and the position of the virtual camera corresponding to the controlled virtual object in the game scene; the volume parameter can represent the influence of the distance between the sound source position and the virtual object on the volume of the target sound effect, and is usually an attenuation coefficient; and further, the playing mode of the target sound effect can be determined based on the volume parameter of the target sound effect.
When determining the attenuation coefficient of the target sound effect, the distance between the sound source position and the position of the virtual camera corresponding to the controlled virtual object in the game scene is calculated, and the attenuation coefficient of the target sound effect is determined based on a predetermined attenuation curve of the target sound effect and the distance between the sound source position and the Listener position; the attenuation curve indicates the trend of the volume of the target sound effect with the distance between the sound source position of the target sound effect and the Listener position.
Here, the relative distance between the sound source position (generally the position of a 3D sounding body) and the virtual camera (Listener) may also be called the attenuation distance, and the sound intensity can be calculated based on the attenuation distance and the attenuation curve. For a conventional 16:9 screen ratio, the distance from the center point (the player position) to the edge of the screen can be set as required to 1000 yards, with an audio attenuation distance ratio of 1:10, which converts to a distance of 100 units in the Wwise audio engine. Different attenuation distance ranges are set for different sound types, mainly the following 5 categories: 3D ambient sound (maximum range 200 units); general sound effects (maximum range 200 units); hero character skill sound effects (maximum range 200 units); hero character footstep sound effects (maximum range 120 units); monster sound effects (maximum range 200 units).
For example, as shown in fig. 2, in the attenuation curve of the most important type, skill sound effects, the volume decreases relatively slowly over the 0-100 range, which ensures that the player hears complete, clear skill sound effects within the screen range without their being masked by distant sounds. Within the 100-140 range the change in volume is more noticeable, which matches hearing habits and lets the player pick up nearby sound information. In the 140-200 range the volume continues to fall off until it becomes inaudible at the maximum range.
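The following C++ sketch illustrates, under stated assumptions, how the attenuation coefficient could be derived from such a curve. The 1:10 yard-to-unit conversion and the 0-100 / 100-140 / 140-200 breakpoints come from the description above; the concrete gain value at each breakpoint is assumed purely to show the shape of a piecewise-linear curve, and is not taken from fig. 2.
    #include <algorithm>

    constexpr float kYardsPerUnit  = 10.0f;   // audio attenuation distance ratio 1:10
    constexpr float kMaxSkillRange = 200.0f;  // maximum attenuation range, in units

    float YardsToAudioUnits(float yards) { return yards / kYardsPerUnit; }

    // Piecewise-linear stand-in for a skill sound effect attenuation curve: slow
    // fall-off inside the screen (0-100), a steeper drop between 100 and 140, and a
    // final fade to silence by 200 units.
    float AttenuationCoefficient(float distanceUnits) {
        const float d = std::clamp(distanceUnits, 0.0f, kMaxSkillRange);
        if (d <= 100.0f) return 1.0f - 0.2f * (d / 100.0f);            // 1.0 -> 0.8
        if (d <= 140.0f) return 0.8f - 0.5f * ((d - 100.0f) / 40.0f);  // 0.8 -> 0.3
        return 0.3f * (1.0f - (d - 140.0f) / 60.0f);                   // 0.3 -> 0.0
    }

    // Example: the screen edge (1000 yards from the player) converts to 100 units,
    // where the sketched coefficient is still 0.8, so in-screen skills stay audible.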
When the playing mode of the target sound effect is determined, the volume of the target sound effect is determined based on the volume parameter, specifically, the initial volume of the target sound effect can be multiplied by the attenuation coefficient, so that the volume of the target sound effect when played is obtained; judging whether the volume of the target sound effect is greater than or equal to a preset volume threshold value or not; if not, determining the playing mode of the target sound effect as follows: playing a target sound effect through a virtual sound part of a preset sound engine; if yes, the playing mode of the target sound effect is determined as follows: and playing the target sound effect through a real sound part of a preset sound engine.
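A minimal sketch of this decision, assuming a normalized volume scale and an illustrative threshold value:
    enum class PlayMode { RealVoice, VirtualVoice };

    constexpr float kVolumeThreshold = 0.05f;  // preset volume threshold (assumed value)

    // Multiply the initial volume by the attenuation coefficient and compare the
    // result against the preset threshold to pick the playing mode.
    PlayMode DecidePlayMode(float initialVolume, float attenuationCoefficient) {
        const float volume = initialVolume * attenuationCoefficient;
        return (volume >= kVolumeThreshold) ? PlayMode::RealVoice
                                            : PlayMode::VirtualVoice;
    }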
Generally, the target sound effect comprises a plurality of sound effects; before the step of playing the target sound effect through the real sound part of the preset sound engine, whether the number of the target sound effect is smaller than or equal to a preset sound number threshold value needs to be judged; if not, determining the playing mode of the target sound effect with the lowest preset priority in the target sound effects as follows: playing a target sound effect through a virtual sound part of a preset sound engine; and then judging whether the number of the target sound effects is smaller than or equal to a preset sound number threshold value or not until the number of the target sound effects is equal to the sound number threshold value. Namely, when the number of the target sound effects needing to be played through the real sound part is larger than the sound number threshold value, the target sound effect with lower priority is moved to the virtual sound part for playing.
In this method, when a sound effect needs to be played, the play event, that is, a global sound effect play event, is sent to the sound engine regardless of the visibility of the game object. Compared with the related art, the client's existing visibility check that gates playback needs to be removed, and additional processing is required so that the sound effect events of invisible objects are inaudible, for example by setting their sound source position to a location far from the Listener as described above.
When many sounds are played simultaneously or in succession, the machine's performance resources are consumed. It is therefore necessary to enable the virtual channel setting in the sound engine and let part of the sounds enter the virtual channel according to set rules. At the same time, it must be ensured that when a sound returns from the virtual channel to the real sound part it is aligned with the timeline, to avoid audio and visuals becoming out of sync.
There are two different ways of moving a sound effect to the virtual sound part:
1. When the volume of a sound is lower than a preset threshold, the sound can be moved into the virtual channel, where it is no longer processed by the sound engine, reducing performance consumption.
2. When a certain sound or a certain class of sounds are played simultaneously and exceed the preset maximum playing number, the sounds with low priority are moved to the virtual channel according to the set priority value.
At design time, machines of different performance levels are treated separately. A low-performance machine can process fewer sounds, so the global maximum playing number is set to 30 and the volume threshold to -40 dB: in the whole game at most 30 sounds can play simultaneously, and when this limit is exceeded the lowest-priority content is moved to the virtual channel, as is any content whose volume is below -40 dB.
Sound priorities (1-100) classify and rank the sounds in the game according to their importance; low-priority sounds are more readily moved to the virtual channel and hidden, leaving the channels for more important sound content. For example, the following values may be set: UI button sound effects: 80; system voice broadcasts: 80; summoner skills: 70; hero ultimate skills: 60; hero minor skills: 55; hero basic attacks and generic skills: 50; hero movement footsteps: 40. In this way the priorities of the various sound effects are distinguished, and when too many sounds are to be played, the lower-priority sound effects enter the virtual sound part.
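Combining the two rules above, a hedged sketch of the voice assignment might look as follows; the -40 dB threshold, the limit of 30 simultaneous sounds and the example priority values come from the description, while the data structure and routine names are assumptions.
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct ActiveSound {
        int   priority;      // 1-100, e.g. UI button 80, hero ultimate 60, footsteps 40
        float volumeDb;      // current attenuated volume, in dB
        bool  virtualVoice;  // true = managed by the engine but not actually rendered
    };

    constexpr int   kGlobalMaxRealVoices = 30;     // global maximum playing number
    constexpr float kVolumeThresholdDb   = -40.0f; // volume threshold

    void AssignVoices(std::vector<ActiveSound>& sounds) {
        // Rule 1: any sound quieter than the threshold goes to the virtual channel.
        for (ActiveSound& s : sounds) {
            s.virtualVoice = (s.volumeDb < kVolumeThresholdDb);
        }
        // Rule 2: if too many real voices remain, demote the lowest-priority ones.
        std::vector<ActiveSound*> real;
        for (ActiveSound& s : sounds) {
            if (!s.virtualVoice) real.push_back(&s);
        }
        std::sort(real.begin(), real.end(),
                  [](const ActiveSound* a, const ActiveSound* b) {
                      return a->priority < b->priority;  // lowest priority first
                  });
        for (std::size_t i = 0; i + kGlobalMaxRealVoices < real.size(); ++i) {
            real[i]->virtualVoice = true;
        }
    }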
The above-mentioned playing mode can produce the following sound playing effect:
example scenario 1: in a battle field, a 1P player (himself) is in a down-route position and an enemy or teammate is in an up-route position. At this time, the distance between the character (sound producing body) of the uplink and the Listener is far beyond the maximum range of 200 yards, so all the 3D sound of the character of the uplink is moved to the virtual channel, so as to save performance.
Example scenario 2: in a team fight, multiple players release skills at the same time, and the number of sounds to play peaks at 40. To preserve the game experience, important sounds such as hero ultimates, system broadcasts, UI sound effects and summoner skills need to be kept, while low-priority (unimportant) sounds, such as footsteps, basic attack and hit sounds, death or recall sounds and minor skill voice lines, are moved to the virtual sound part.
For the above method embodiment, referring to the playing control device of sound effect shown in fig. 3, a graphical user interface is provided through a terminal device, the graphical user interface includes a scene picture of a part of game scene, and the scene picture is obtained by shooting in the game scene by a virtual camera corresponding to a controlled virtual object; the device includes:
a target sound effect determining module 302, configured to determine, in response to a sound effect trigger event, a target object that triggers a sound effect and a target sound effect to be played;
a visibility state acquiring module 304, configured to acquire a visibility state of the target object; wherein the visibility state is to: indicating whether the target object is viewable by the controlled virtual object or a virtual object other than the controlled virtual object;
a playing mode determining module 306, configured to determine a playing mode of the target sound effect based on the visibility state of the target object and the position of the virtual camera corresponding to the controlled virtual object in the game scene, and play the target sound effect;
the updating module 308 is configured to update the playing mode of the target sound effect in response to a change in the visibility state of the target object, and continue to play the target sound effect based on the updated playing mode.
The sound effect playing control device responds to a sound effect trigger event and determines a target object for triggering the sound effect and a target sound effect to be played; acquiring the visibility state of the target object; the visibility state is used to indicate whether the target object can be seen by the controlled virtual object or other virtual objects; determining a playing mode of a target sound effect and playing the target sound effect based on the visibility state of the target object and the position of a virtual camera corresponding to the controlled virtual object in a game scene; and responding to the change of the visibility state of the target object, updating the playing mode of the target sound effect, and continuing to play the target sound effect based on the updated playing mode. In the above manner, the sound effect playing mode is determined based on the visibility state of the target object, and when the visibility changes, the sound effect playing mode is adjusted, so that the player controlling the virtual object can hear the sound effect of the visible object, thereby obtaining the game situation in the whole game scene and improving the user experience.
The game scene comprises a hidden area; visibility states include visible or invisible; the visibility state acquisition module is further to: acquiring the position of a target object in a game scene; judging whether the target object is located in the hidden area or not based on the position of the target object; if the target object is located in the hidden area, determining that the visibility state of the target object is invisible; and if the target object is not located in the hidden area, determining the visibility state of the target object to be visible.
The playing mode determining module is further configured to: determining the sound source position of the target object based on the visibility state of the target object; and determining the playing mode of the target sound effect based on the sound source position of the target object and the position of the virtual camera corresponding to the controlled virtual object in the game scene.
The visibility state includes visible or invisible; the playing mode determining module is further configured to: if the visibility state of the target object is visible, determining the position of the target object in the game scene as the sound source position corresponding to the target object; if the visibility state of the target object is invisible, determining a preset position in the game scene as a sound source position corresponding to the target object; the distance between the preset position and the position of the virtual camera corresponding to the controlled virtual object in the game scene is larger than a preset distance threshold value.
The playing mode determining module is further configured to: determining a volume parameter of a target sound effect based on the sound source position of the target object and the position of a virtual camera corresponding to the controlled virtual object in a game scene; the volume parameter represents the influence of the distance between the sound source position and the virtual object on the volume of the target sound effect; and determining the playing mode of the target sound effect based on the volume parameter of the target sound effect.
The volume parameter comprises an attenuation coefficient; the playing mode determining module is further configured to: calculate the distance between the sound source position and the position of the virtual camera corresponding to the controlled virtual object in the game scene; and determine the attenuation coefficient of the target sound effect based on a predetermined attenuation curve of the target sound effect and the distance between the sound source position and the listener position; the attenuation curve indicates the trend of the volume of the target sound effect with the distance between the sound source position of the target sound effect and the listener position.
The playing mode determining module is further configured to: determining the volume of the target sound effect based on the volume parameter; judging whether the volume of the target sound effect is larger than or equal to a preset volume threshold value or not; if not, determining the playing mode of the target sound effect as follows: playing a target sound effect through a virtual sound part of a preset sound engine; if yes, the playing mode of the target sound effect is determined as follows: and playing the target sound effect through a preset real sound part of the sound engine.
The target sound effects comprise a plurality of sound effects; the apparatus further includes: a quantity judging module, configured to judge whether the number of target sound effects is less than or equal to a preset sound number threshold; and a priority playing mode determining module, configured to, if not, determine the playing mode of the target sound effect with the lowest preset priority among the target sound effects as: playing that target sound effect through the virtual sound part of the preset sound engine; and continue to perform the step of judging whether the number of target sound effects is less than or equal to the preset sound number threshold until the number of target sound effects is equal to the sound number threshold.
The present embodiment further provides an electronic device, including a processor and a memory, where the memory stores machine executable instructions capable of being executed by the processor, and the processor executes the machine executable instructions to implement the playback control method for the sound effects, for example:
as shown in fig. 1, in response to a sound effect triggering event, determining a target object triggering a sound effect and a target sound effect to be played; acquiring a visibility state of a target object; wherein the visibility state is to: indicating whether the target object is viewable by the controlled virtual object or a virtual object other than the controlled virtual object; determining a playing mode of a target sound effect and playing the target sound effect based on the visibility state of the target object and the position of a virtual camera corresponding to the controlled virtual object in a game scene; and responding to the change of the visibility state of the target object, updating the playing mode of the target sound effect, and continuing to play the target sound effect based on the updated playing mode.
Optionally, the game scene includes a hidden area; visibility states include visible or invisible; the step of obtaining the visibility state of the target object comprises the following steps: acquiring the position of a target object in a game scene; judging whether the target object is located in the hidden area or not based on the position of the target object; if the target object is located in the hidden area, determining the visibility state of the target object to be invisible; and if the target object is not located in the hidden area, determining the visibility state of the target object to be visible.
Optionally, the step of determining the playing mode of the target sound effect based on the visibility state of the target object and the position of the virtual camera corresponding to the controlled virtual object in the game scene includes: determining the sound source position of the target object based on the visibility state of the target object; and determining the playing mode of the target sound effect based on the sound source position of the target object and the position of the virtual camera corresponding to the controlled virtual object in the game scene.
Optionally, the visibility state includes visible or invisible; the step of determining the sound source position corresponding to the target object based on the visibility state of the target object comprises the following steps: if the visibility state of the target object is visible, determining the position of the target object in the game scene as the sound source position corresponding to the target object; if the visibility state of the target object is invisible, determining a preset position in the game scene as a sound source position corresponding to the target object; the distance between the preset position and the position of the virtual camera corresponding to the controlled virtual object in the game scene is larger than a preset distance threshold value.
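The following short sketch illustrates this source-position selection; the coordinate layout and the far-away preset position are assumed values chosen only so that the resulting distance exceeds any plausible distance threshold.

```python
# Sketch (names and values are assumptions): a visible object emits from its
# real position, while an invisible object is assigned a preset position so far
# from the virtual camera that the effect attenuates below audibility.

FAR_AWAY = (1.0e6, 0.0, 1.0e6)   # assumed preset position beyond any distance threshold


def sound_source_position(visibility: str, object_pos: tuple,
                          preset_far_position: tuple = FAR_AWAY) -> tuple:
    if visibility == "visible":
        return object_pos            # use the object's actual position in the scene
    return preset_far_position       # invisible: pretend the source is far away


if __name__ == "__main__":
    print(sound_source_position("visible", (3.0, 0.0, 4.0)))
    print(sound_source_position("invisible", (3.0, 0.0, 4.0)))
```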
Optionally, the step of determining the playing mode of the target sound effect based on the sound source position of the target object and the position of the virtual camera corresponding to the controlled virtual object in the game scene includes: determining a volume parameter of a target sound effect based on the sound source position of the target object and the position of a virtual camera corresponding to the controlled virtual object in a game scene; the volume parameter represents the influence of the distance between the sound source position and the virtual object on the volume of the target sound effect; and determining the playing mode of the target sound effect based on the volume parameter of the target sound effect.
Optionally, the volume parameter includes an attenuation coefficient; the step of determining the volume parameter of the target sound effect based on the sound source position of the target object and the position of the virtual camera corresponding to the controlled virtual object in the game scene includes: calculating the distance between the sound source position and the position of the virtual camera corresponding to the controlled virtual object in the game scene; and determining the attenuation coefficient of the target sound effect based on a predetermined attenuation curve of the target sound effect and the distance between the sound source position and the listener position; the attenuation curve indicates the variation trend of the volume of the target sound effect with the distance between the sound source position of the target sound effect and the listener position.
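As one possible reading of this step, the sketch below computes an attenuation coefficient by linearly interpolating a piecewise attenuation curve over the source-to-camera (listener) distance; the curve control points are invented sample data, not values from the patent.

```python
# Sketch under stated assumptions: derive the attenuation coefficient from a
# piecewise-linear attenuation curve and the source-to-listener distance.

import bisect
import math

# (distance, attenuation) control points of an assumed per-effect attenuation curve
ATTENUATION_CURVE = [(0.0, 1.0), (5.0, 0.8), (15.0, 0.4), (30.0, 0.1), (50.0, 0.0)]


def attenuation_coefficient(dist: float, curve=ATTENUATION_CURVE) -> float:
    xs = [x for x, _ in curve]
    if dist <= xs[0]:
        return curve[0][1]
    if dist >= xs[-1]:
        return curve[-1][1]
    i = bisect.bisect_right(xs, dist)            # segment containing dist
    (x0, y0), (x1, y1) = curve[i - 1], curve[i]
    return y0 + (y1 - y0) * (dist - x0) / (x1 - x0)


if __name__ == "__main__":
    source = (10.0, 0.0, 0.0)
    camera = (0.0, 0.0, 0.0)
    d = math.dist(source, camera)                # source-to-listener distance
    print(f"distance={d:.1f}, attenuation={attenuation_coefficient(d):.2f}")
```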
Optionally, the step of determining the playing mode of the target sound effect based on the playing parameter of the target sound effect includes: determining the volume of the target sound effect based on the volume parameter; judging whether the volume of the target sound effect is greater than or equal to a preset volume threshold; if not, determining the playing mode of the target sound effect as: playing the target sound effect through a virtual sound part of a preset sound engine; and if so, determining the playing mode of the target sound effect as: playing the target sound effect through a real sound part of the preset sound engine.
Optionally, the target sound effects include a plurality of sound effects; before the step of determining the playing mode of the target sound effect as playing the target sound effect through the real sound part of the preset sound engine, the method further comprises: judging whether the number of the target sound effects is less than or equal to a preset sound number threshold; if not, determining the playing mode of the target sound effect with the lowest preset priority among the target sound effects as: playing the target sound effect through a virtual sound part of the preset sound engine; and continuing to execute the step of judging whether the number of the target sound effects is less than or equal to the preset sound number threshold until the number of the target sound effects is equal to the sound number threshold.
The playing control method of the sound effect, in response to a sound effect trigger event, determines a target object triggering the sound effect and a target sound effect to be played; acquires the visibility state of the target object, the visibility state being used to indicate whether the target object can be seen by the controlled virtual object or other virtual objects; determines a playing mode of the target sound effect and plays the target sound effect based on the visibility state of the target object and the position of the virtual camera corresponding to the controlled virtual object in the game scene; and, in response to a change in the visibility state of the target object, updates the playing mode of the target sound effect and continues to play the target sound effect based on the updated playing mode. In the above manner, the playing mode of the sound effect is determined based on the visibility state of the target object and is adjusted when the visibility changes, so that the player controlling the virtual object can hear the sound effects of visible objects, grasp the game situation in the whole game scene, and thus enjoy an improved user experience.
Referring to fig. 4, the electronic device includes a processor 100 and a memory 101, the memory 101 stores machine executable instructions capable of being executed by the processor 100, and the processor 100 executes the machine executable instructions to implement the playback control method of the sound effects.
Further, the electronic device shown in fig. 4 further includes a bus 102 and a communication interface 103, and the processor 100, the communication interface 103, and the memory 101 are connected through the bus 102.
The memory 101 may include a high-speed Random Access Memory (RAM) and may also include a non-volatile memory, for example, at least one disk memory. The communication connection between a network element of the system and at least one other network element is realized through at least one communication interface 103 (which may be wired or wireless), and the internet, a wide area network, a local area network, a metropolitan area network, or the like may be used. The bus 102 may be an ISA bus, a PCI bus, an EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one double-headed arrow is shown in fig. 4, but this does not indicate that there is only one bus or one type of bus.
The processor 100 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 100 or by instructions in the form of software. The processor 100 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly embodied as being performed by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 101, and the processor 100 reads the information in the memory 101 and completes the steps of the method of the foregoing embodiments in combination with its hardware.
The embodiment also provides a machine-readable storage medium, which stores machine-executable instructions, and when the machine-executable instructions are called and executed by the processor, the machine-executable instructions cause the processor to realize the play control method of the sound effect.
The computer program product of the sound effect playing control method, apparatus, and electronic device provided by the embodiments of the present invention includes a computer-readable storage medium storing program code, where the instructions included in the program code may be used to execute the method described in the foregoing method embodiments, for example:
as shown in fig. 1, in response to a sound effect triggering event, determining a target object triggering a sound effect and a target sound effect to be played; acquiring a visibility state of the target object; wherein the visibility state is used to indicate whether the target object can be seen by the controlled virtual object or a virtual object other than the controlled virtual object; determining a playing mode of the target sound effect and playing the target sound effect based on the visibility state of the target object and the position of a virtual camera corresponding to the controlled virtual object in a game scene; and in response to a change in the visibility state of the target object, updating the playing mode of the target sound effect and continuing to play the target sound effect based on the updated playing mode.
Optionally, the game scene includes a hidden area; visibility states include visible or invisible; the step of obtaining the visibility state of the target object comprises the following steps: acquiring the position of a target object in a game scene; judging whether the target object is located in the hidden area or not based on the position of the target object; if the target object is located in the hidden area, determining the visibility state of the target object to be invisible; and if the target object is not located in the hidden area, determining the visibility state of the target object to be visible.
Optionally, the step of determining the playing mode of the target sound effect based on the visibility state of the target object and the position of the virtual camera corresponding to the controlled virtual object in the game scene includes: determining the sound source position of the target object based on the visibility state of the target object; and determining the playing mode of the target sound effect based on the sound source position of the target object and the position of the virtual camera corresponding to the controlled virtual object in the game scene.
Optionally, the visibility state includes visible or invisible; the step of determining the position of the sound source corresponding to the target object based on the visibility state of the target object comprises the following steps: if the visibility state of the target object is visible, determining the position of the target object in the game scene as the sound source position corresponding to the target object; if the visibility state of the target object is invisible, determining a preset position in the game scene as a sound source position corresponding to the target object; the distance between the preset position and the position of the virtual camera corresponding to the controlled virtual object in the game scene is larger than a preset distance threshold value.
Optionally, the step of determining the playing mode of the target sound effect based on the sound source position of the target object and the position of the virtual camera corresponding to the controlled virtual object in the game scene includes: determining a volume parameter of a target sound effect based on the sound source position of the target object and the position of a virtual camera corresponding to the controlled virtual object in a game scene; the volume parameter represents the influence of the distance between the sound source position and the virtual object on the volume of the target sound effect; and determining the playing mode of the target sound effect based on the volume parameter of the target sound effect.
Optionally, the volume parameter includes an attenuation coefficient; the step of determining the volume parameter of the target sound effect based on the sound source position of the target object and the position of the virtual camera corresponding to the controlled virtual object in the game scene includes: calculating the distance between the sound source position and the position of the virtual camera corresponding to the controlled virtual object in the game scene; and determining the attenuation coefficient of the target sound effect based on a predetermined attenuation curve of the target sound effect and the distance between the sound source position and the listener position; the attenuation curve indicates the variation trend of the volume of the target sound effect with the distance between the sound source position of the target sound effect and the listener position.
Optionally, the step of determining the playing mode of the target sound effect based on the playing parameter of the target sound effect includes: determining the volume of the target sound effect based on the volume parameter; judging whether the volume of the target sound effect is greater than or equal to a preset volume threshold; if not, determining the playing mode of the target sound effect as: playing the target sound effect through a virtual sound part of a preset sound engine; and if so, determining the playing mode of the target sound effect as: playing the target sound effect through a real sound part of the preset sound engine.
Optionally, the target sound effects include a plurality of sound effects; before the step of determining the playing mode of the target sound effect as playing the target sound effect through the real sound part of the preset sound engine, the method further comprises: judging whether the number of the target sound effects is less than or equal to a preset sound number threshold; if not, determining the playing mode of the target sound effect with the lowest preset priority among the target sound effects as: playing the target sound effect through a virtual sound part of the preset sound engine; and continuing to execute the step of judging whether the number of the target sound effects is less than or equal to the preset sound number threshold until the number of the target sound effects is equal to the sound number threshold.
The playing control method of the sound effect, in response to a sound effect trigger event, determines a target object triggering the sound effect and a target sound effect to be played; acquires the visibility state of the target object, the visibility state being used to indicate whether the target object can be seen by the controlled virtual object or other virtual objects; determines a playing mode of the target sound effect and plays the target sound effect based on the visibility state of the target object and the position of the virtual camera corresponding to the controlled virtual object in the game scene; and, in response to a change in the visibility state of the target object, updates the playing mode of the target sound effect and continues to play the target sound effect based on the updated playing mode. In the above manner, the playing mode of the sound effect is determined based on the visibility state of the target object and is adjusted when the visibility changes, so that the player controlling the virtual object can hear the sound effects of visible objects, grasp the game situation in the whole game scene, and thus enjoy an improved user experience.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working process of the system and the apparatus described above may refer to the corresponding process in the foregoing method embodiment, and details are not described herein again.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, for example, as a fixed connection, a removable connection, or an integral connection; as a mechanical connection or an electrical connection; and as a direct connection, an indirect connection through an intermediate medium, or internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the foregoing embodiments are merely specific implementations of the present invention, used to illustrate the technical solutions of the present invention rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art can still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some technical features within the technical scope of the present disclosure; such modifications, changes, or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present invention, and shall all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (11)

1. A playing control method of a sound effect, characterized in that a graphical user interface is provided through a terminal device, the graphical user interface comprises a scene picture of a part of a game scene, and the scene picture is obtained by shooting the game scene through a virtual camera corresponding to a controlled virtual object; the method comprises the following steps:
responding to a sound effect trigger event, and determining a target object for triggering a sound effect and a target sound effect to be played;
acquiring the visibility state of the target object; wherein the visibility state is used to indicate whether the target object can be seen by the controlled virtual object or a virtual object other than the controlled virtual object;
determining a playing mode of the target sound effect and playing the target sound effect based on the visibility state of the target object and the position of the virtual camera corresponding to the controlled virtual object in the game scene;
and responding to the change of the visibility state of the target object, updating the playing mode of the target sound effect, and continuing to play the target sound effect based on the updated playing mode.
2. The method of claim 1, wherein the game scene comprises a hidden area; the visibility state comprises visible or invisible;
the step of obtaining the visibility state of the target object comprises:
acquiring the position of the target object in the game scene;
judging whether the target object is located in the hidden area or not based on the position of the target object;
if the target object is located in the hidden area, determining that the visibility state of the target object is invisible;
and if the target object is not located in the hidden area, determining the visibility state of the target object to be visible.
3. The method according to claim 1, wherein the step of determining the playing mode of the target sound effect based on the visibility state of the target object and the position of the virtual camera corresponding to the controlled virtual object in the game scene comprises:
determining a sound source position of the target object based on the visibility state of the target object;
and determining the playing mode of the target sound effect based on the sound source position of the target object and the position of the virtual camera corresponding to the controlled virtual object in the game scene.
4. The method of claim 3, wherein the visibility state comprises visible or invisible;
the step of determining the position of the sound source corresponding to the target object based on the visibility state of the target object comprises the following steps:
if the visibility state of the target object is visible, determining the position of the target object in a game scene as a sound source position corresponding to the target object;
if the visibility state of the target object is invisible, determining a preset position in the game scene as a sound source position corresponding to the target object; the distance between the preset position and the position of the virtual camera corresponding to the controlled virtual object in the game scene is larger than a preset distance threshold value.
5. The method according to claim 1, wherein the step of determining the playing mode of the target sound effect based on the sound source position of the target object and the position of the virtual camera corresponding to the controlled virtual object in the game scene comprises:
determining a volume parameter of the target sound effect based on the sound source position of the target object and the position of a virtual camera corresponding to the controlled virtual object in the game scene; the volume parameter represents the influence of the distance between the sound source position and the virtual object on the volume of the target sound effect;
and determining the playing mode of the target sound effect based on the volume parameter of the target sound effect.
6. The method of claim 5, wherein the volume parameter comprises an attenuation coefficient;
determining a volume parameter of the target sound effect based on the sound source position of the target object and the position of the virtual camera corresponding to the controlled virtual object in the game scene, wherein the step comprises the following steps:
calculating the distance between the sound source position and the position of a virtual camera corresponding to the controlled virtual object in the game scene;
determining the attenuation coefficient of the target sound effect based on the predetermined attenuation curve of the target sound effect and the distance between the sound source position and the listener position; the attenuation curve indicates the variation trend of the volume of the target sound effect along with the distance between the sound source position of the target sound effect and the position of a listener.
7. The method according to claim 5, wherein the step of determining the playing mode of the target sound effect based on the playing parameters of the target sound effect comprises:
determining the volume of the target sound effect based on the volume parameter;
judging whether the volume of the target sound effect is larger than or equal to a preset volume threshold value or not;
if not, determining the playing mode of the target sound effect as follows: playing the target sound effect through a virtual sound part of a preset sound engine;
if yes, determining the playing mode of the target sound effect as follows: and playing the target sound effect through a real sound part of a preset sound engine.
8. The method of claim 7, wherein the target sound effect comprises a plurality;
and before the step of determining the playing mode of the target sound effect as: playing the target sound effect through the real sound part of the preset sound engine, the method further comprises the following steps:
judging whether the number of the target sound effects is smaller than or equal to a preset sound number threshold value or not;
if not, determining the playing mode of the target sound effect with the lowest preset priority in the target sound effects as follows: playing the target sound effect through a virtual sound part of a preset sound engine; and continuing to execute the step of judging whether the number of the target sound effects is less than or equal to a preset sound number threshold value until the number of the target sound effects is equal to the sound number threshold value.
9. A playing control device of a sound effect, characterized in that a terminal device provides a graphical user interface, the graphical user interface comprises a scene picture of a part of a game scene, and the scene picture is obtained by shooting the game scene through a virtual camera corresponding to a controlled virtual object; the device comprises:
the target sound effect determining module is used for responding to the sound effect triggering event and determining a target object triggering the sound effect and the target sound effect to be played;
a visibility state obtaining module, configured to obtain a visibility state of the target object; wherein the visibility state is used to indicate whether the target object can be seen by the controlled virtual object or a virtual object other than the controlled virtual object;
a playing mode determining module, configured to determine a playing mode of the target sound effect based on a visibility state of the target object and a position of a virtual camera corresponding to the controlled virtual object in the game scene, and play the target sound effect;
and the updating module is used for responding to the change of the visibility state of the target object, updating the playing mode of the target sound effect and continuously playing the target sound effect based on the updated playing mode.
10. An electronic device comprising a processor and a memory, the memory storing machine executable instructions executable by the processor, the processor executing the machine executable instructions to implement the method of playback control of audio effects of any of claims 1-8.
11. A machine-readable storage medium having stored thereon machine-executable instructions which, when invoked and executed by a processor, cause the processor to implement the method for playback control of sound effects of any of claims 1-8.
CN202210393780.5A 2022-04-14 2022-04-14 Sound effect playing control method and device and electronic equipment Pending CN114887327A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210393780.5A CN114887327A (en) 2022-04-14 2022-04-14 Sound effect playing control method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210393780.5A CN114887327A (en) 2022-04-14 2022-04-14 Sound effect playing control method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN114887327A true CN114887327A (en) 2022-08-12

Family

ID=82717229

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210393780.5A Pending CN114887327A (en) 2022-04-14 2022-04-14 Sound effect playing control method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114887327A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024055558A1 (en) * 2022-09-15 2024-03-21 网易(杭州)网络有限公司 Interaction control method and apparatus for audio playback, and electronic device

Similar Documents

Publication Publication Date Title
US9564866B2 (en) Method and device for prioritizing audio delivery in an application
WO2019153840A1 (en) Sound reproduction method and device, storage medium and electronic device
US9403088B2 (en) Method of controlling computer device, storage medium, and computer device
CN111770356B (en) Interaction method and device based on live game
US9327195B2 (en) Accommodating latency in a server-based application
JPWO2005107903A1 (en) Electronic game device, data processing method in electronic game device, program therefor, and storage medium
CN112245921B (en) Virtual object control method, device, equipment and storage medium
KR102645535B1 (en) Virtual object control method and apparatus in a virtual scene, devices and storage media
KR20160075661A (en) Variable audio parameter setting
WO2022068452A1 (en) Interactive processing method and apparatus for virtual props, electronic device, and readable storage medium
JP2006230578A (en) Program, information storage medium and game apparatus
CN110860087B (en) Virtual object control method, device and storage medium
CN112295228B (en) Virtual object control method and device, electronic equipment and storage medium
JP2022532305A (en) How to display virtual scenes, devices, terminals and computer programs
JP2022538204A (en) Information display method, device, equipment and program
CN108939535B (en) Sound effect control method and device for virtual scene, storage medium and electronic equipment
CN114344892A (en) Data processing method and related device
CN114887327A (en) Sound effect playing control method and device and electronic equipment
CN111773702A (en) Control method and device for live game
WO2024108944A1 (en) Sound control method and apparatus in game, and electronic device
CN112221135B (en) Picture display method, device, equipment and storage medium
JP2023174714A (en) Program, image generation apparatus, and image generation method
CN111265867A (en) Method and device for displaying game picture, terminal and storage medium
WO2023151283A1 (en) Method and apparatus for processing audio in game, and storage medium and electronic apparatus
KR20220083827A (en) Method and apparatus, terminal, and storage medium for displaying a virtual scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination