
Audio playing method and device and electronic equipment

Info

Publication number
CN114404973A
Authority
CN
China
Prior art keywords
audio
sound effect
playing
played
target
Prior art date
Legal status
Pending
Application number
CN202111632559.2A
Other languages
Chinese (zh)
Inventor
朱锐
许杰
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202111632559.2A
Publication of CN114404973A


Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/54 Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Pinball Game Machines (AREA)

Abstract

The invention discloses an audio playing method and device, and an electronic device. The method includes: acquiring the position of a virtual character in a game scene; acquiring, according to the position of the virtual character, an audio receiving range of a virtual radio in the game scene, where the virtual radio is bound to a specific position that is fixed relative to the virtual character; determining distribution information of at least one sound effect point within the audio receiving range, where the at least one sound effect point is preset in the game scene and identifies at least one sound source position; determining, based on the distribution information, the audio to be played corresponding to the audio receiving range and a target playing position corresponding to the audio to be played; and playing the audio to be played at the target playing position. The invention solves the technical problem of a poor audio playing effect caused by a low degree of fit between a placed sound source and the game scene.

Description

Audio playing method and device and electronic equipment
Technical Field
The invention relates to the technical field of audio, in particular to a method and a device for playing audio and electronic equipment.
Background
Audio effects have an obvious influence on the game experience: they improve the player's immersion and enhance the overall feel of the game. On a mobile device, volume, power consumption, performance and other factors are limited, and good audio helps to create a more complete, large-scale scene experience. Developing real and natural audio effects that fit the game scene is therefore becoming an important factor in the player's game experience.
In the prior art, when the sound effects of a game scene are configured in a game engine, the configuration relies on a sound effect component in the scene editor. The configuration process of the sound effect component mainly consists of configuring "sound effect balls": the operator drags a three-dimensional area to which a sound effect is added to a designated position in the game scene and adjusts the size of that area. For example, FIG. 1 is a schematic diagram of a "sound effect ball" according to the prior art. As shown in FIG. 1, the position of the game camera and the listener (virtual radio) are contained in a "sound effect ball" area.
However, this configuration method of placing "sound effect balls" in the game scene requires adding and placing the balls area by area according to the game map. During placement, the size and position of each "sound effect ball" have to be adjusted repeatedly, and these tedious steps reduce the efficiency of audio production. In addition, a "sound effect ball" can only be placed in a regular terrain area; when an irregular terrain area (such as a river, a coastline or a cave) is encountered, the "sound effect ball" cannot be completely attached to it, and the inaccurate placement causes problems such as overlapping sounds, a strong sense of fragmentation and an unnatural listening experience.
Disclosure of Invention
The embodiments of the invention provide an audio playing method and device, and an electronic device, so as to at least solve the technical problem of a poor audio playing effect caused by a low degree of fit between a placed sound source and the game scene.
According to an aspect of the embodiments of the present invention, a method of playing audio is provided, including: acquiring the position of a virtual character in a game scene; acquiring, according to the position of the virtual character, an audio receiving range of a virtual radio in the game scene, where the virtual radio is bound to a specific position that is fixed relative to the virtual character; determining distribution information of at least one sound effect point within the audio receiving range, where the at least one sound effect point is preset in the game scene and identifies at least one sound source position; determining, based on the distribution information, the audio to be played corresponding to the audio receiving range and a target playing position corresponding to the audio to be played; and playing the audio to be played at the target playing position.
Further, the method for playing audio further comprises: acquiring a target position and a target orientation of a virtual character in a game scene; and determining the audio receiving range of the virtual radio according to the target position and the target orientation.
Further, the method for playing audio further comprises: acquiring a geometric shape corresponding to the distribution information; determining a target playing position of the audio to be played according to the geometric shape; and synthesizing the audio corresponding to the at least one sound effect point according to the geometric shape to obtain the audio to be played.
Further, the method for playing audio further comprises: determining the type of the sound effect point corresponding to at least one sound effect point; when the number of the types of the sound effect points is multiple, determining the geometric shape formed by the sound effect points of each type; and determining the target playing position corresponding to each type of sound effect point according to the geometric shape formed by each type of sound effect point.
Further, the method for playing audio further comprises: acquiring an audio playing range corresponding to audio to be played; detecting whether the target position of the virtual radio is within an audio playing range to obtain a detection result; and when the detection result indicates that the target position is within the audio playing range, playing the audio to be played at the target playing position.
Further, the method for playing audio further comprises: and when the detection result indicates that the target position is out of the audio playing range, stopping playing the audio to be played at the target playing position.
Further, the method for playing audio further comprises: detecting whether the target playing position is located in a preset area or not; and if the target playing position is located in the preset area and the virtual radio is located in the preset area, playing the audio to be played at the target playing position.
Further, the method for playing audio further comprises: and if the target playing position is located in the preset area and the virtual radio is located outside the preset area, stopping playing the audio to be played at the target playing position.
Further, the method for playing audio further comprises: before determining the distribution information of at least one sound effect point, acquiring the current sound effect point quantity corresponding to at least one virtual component in a game scene and the distribution density of at least one sound effect point on at least one virtual component; and setting at least one sound effect point on at least one virtual component according to the number and the distribution density of the current sound effect points, and establishing an association relationship between the at least one sound effect point and the at least one virtual component.
Further, the method for playing audio further comprises: determining the maximum sound effect point number and the maximum distribution density corresponding to the current virtual component, wherein the current virtual component is any one of at least one virtual component; when the number of the current sound effect points is larger than or equal to the maximum number of the sound effect points and/or the distribution density is larger than or equal to the maximum distribution density, setting at least one sound effect point on the current virtual component according to the maximum number of the sound effect points and the maximum distribution density; when the number of current sound effect points is smaller than the number of maximum sound effect points and the distribution density is smaller than the maximum distribution density, at least one sound effect point is set on the current virtual assembly according to the number of current sound effect points and the distribution density.
Further, the method for playing audio further comprises: when the change of the geometric information of at least one virtual component is detected, acquiring the changed target geometric information; and adjusting the current sound effect point number and/or the distribution density of at least one sound effect point of at least one virtual component based on the target geometric information.
According to another aspect of the embodiments of the present invention, there is also provided an apparatus for playing audio, including: the acquisition module is used for acquiring the audio receiving range of the virtual object in the game scene; the first determining module is used for determining the distribution information of at least one sound effect point in the audio receiving range, wherein the at least one sound effect point is used for identifying at least one sound source position of the audio to be played; the second determining module is used for determining the audio to be played corresponding to the audio receiving range and the target playing position corresponding to the audio to be played based on the distribution information; and the playing module is used for playing the audio to be played at the target playing position.
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to execute the above-mentioned method for playing audio when running.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including one or more processors and a storage device for storing one or more programs, where the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the above-mentioned method of playing audio.
In the embodiment of the invention, the audio to be played and the target playing position of the audio to be played are determined according to the audio receiving range of the virtual radio. The position of the virtual character in the game scene is first obtained; the audio receiving range of the virtual radio in the game scene is then obtained according to the position of the virtual character; the distribution information of at least one sound effect point within the audio receiving range is determined; the audio to be played corresponding to the audio receiving range and the target playing position corresponding to the audio to be played are determined based on the distribution information; and the audio to be played is played at the target playing position. The virtual radio is bound to a specific position that is fixed relative to the virtual character, and the at least one sound effect point is preset in the game scene and identifies at least one sound source position.
It can be seen from the above that, in the present application, the target playing position of the audio to be played can be determined simply by determining the distribution of a plurality of sound effect points. Because no "sound effect balls" need to be configured, the audio effect of the game scene does not depend on the number, placement positions or placement ranges of "sound effect balls"; the tedious steps of the "sound effect ball" configuration process are thus avoided, and the configuration efficiency of the audio to be played is improved. In addition, the generation of virtual sound effect points is not affected by the terrain area: whether or not the terrain area is regular, the sound effect points can be matched to it, so the finally generated audio effect is unified, complete and continuous, has a stronger sense of direction, and therefore improves the player's gaming experience.
Therefore, the technical solution of the present application achieves the purpose of improving the audio playing effect, achieves the effect of improving the configuration efficiency of the audio to be played, and solves the technical problem of a poor audio playing effect caused by a low degree of fit between a placed sound source and the game scene.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram of a "sound effect ball" according to the prior art;
FIG. 2 is a schematic illustration of a game scenario according to the prior art;
FIG. 3 is a schematic diagram of a method of playing audio according to the prior art;
FIG. 4 is a schematic diagram of a method of playing audio according to the prior art;
FIG. 5 is a schematic diagram of a method of playing audio according to the prior art;
FIG. 6 is a schematic diagram of a method of playing audio according to the prior art;
FIG. 7 is a flow diagram of a method of playing audio according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of an alternative method of playing audio in accordance with embodiments of the present invention;
FIG. 9 is a diagram illustrating an alternative binding sound effect point, according to an embodiment of the present invention;
FIG. 10 is a diagram illustrating an alternative binding sound effect point, according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of an alternative method of playing audio in accordance with embodiments of the present invention;
FIG. 12 is a flow diagram of an alternative configuration of the Ambience component in accordance with embodiments of the present invention;
FIG. 13 is a schematic diagram of an alternative Ambience component setup interface, according to embodiments of the present invention;
FIG. 14 is a schematic illustration of an alternative operating interface for the Ambience component in accordance with embodiments of the present invention;
FIG. 15 is a schematic illustration of an alternative operating interface for the Ambience component in accordance with embodiments of the present invention;
FIG. 16 is a schematic illustration of an alternative operating interface for the Ambience component in accordance with embodiments of the present invention;
FIG. 17 is a schematic illustration of an alternative operating interface for the Ambience component in accordance with embodiments of the present invention;
FIG. 18 is a schematic illustration of an alternative operating interface for the Ambience component in accordance with embodiments of the present invention;
FIG. 19 is a schematic illustration of an alternative Ambience component operating interface, according to embodiments of the present invention;
FIG. 20 is a schematic illustration of an alternative operating interface for the Ambience component in accordance with embodiments of the present invention;
FIG. 21 is a diagram illustrating an alternative binding sound effect point, according to an embodiment of the present invention;
FIG. 22 is a flow chart of an alternative binding sound effect point according to an embodiment of the present invention;
FIG. 23 is a schematic diagram of an alternative apparatus for playing audio according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
In accordance with an embodiment of the present invention, there is provided an embodiment of a method of playing audio, it is noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than that presented herein.
In addition, it should be noted that, an audio effect synthesis system can be used as an execution subject of the method for playing audio in the present application.
Fig. 7 is a flowchart of a method of playing audio according to an embodiment of the present invention, as shown in fig. 7, the method includes the steps of:
step S702, the position of the virtual character in the game scene is obtained.
In step S702, the virtual character may be a virtual character in the game, for example, after the player enters the game, one of the game characters in the game may be selected, and the game character may move in the game scene. The sound effect synthesis system can monitor the position of the virtual character in the game scene in real time, for example, the sound effect synthesis system can acquire the position of the virtual character in the game scene through the virtual camera, and determine the coordinate of the position of the virtual character by combining the world coordinate in the game scene. In addition, the game character may be a virtual model such as a virtual monster, a virtual building, a virtual vehicle, or a virtual river in the game.
Step S704, acquiring the audio receiving range of the virtual radio in the game scene according to the position of the virtual character.
In step S704, the virtual radio is bound to a specific position that is fixed relative to the virtual character. The virtual radio may be represented by a listener, which may or may not follow the virtual character in the game, so the position of the virtual radio and the position of the virtual character may be the same or different. For example, the listener may be mounted on a virtual character so that it follows the virtual character across the map from a first-person perspective, and the audio receiving range of the listener changes constantly as the virtual character moves. Alternatively, the listener can enter the game as an observer (i.e., in the third person) by being attached to any point around the virtual character without following any particular character; wherever the listener moves on the map, the sound effect synthesis system generates a corresponding audio receiving range according to the listener's current position. The audio receiving range is the area within which the listener can receive audio and may have any shape; for example, a circular audio receiving range may be generated around the position of the listener with a preset radius.
Step S706, determining the distribution information of at least one sound effect point in the audio receiving range.
In step S706, at least one sound effect point is preset in the game scene and is used to identify at least one sound source position. Optionally, a sound effect point may be bound to the terrain in the game map, or may be bound to a specific model. Since many ambient sound effects are terrain-dependent, the sound effect synthesis system can bind sound effect points directly to the terrain. The terrain may be a regularly shaped terrain, such as a rectangular grassland, or an irregularly shaped terrain, such as a river or a waterfall. Sound effect points can also be bound to a specific model in the game scene, such as a beach or a house, so that a sound effect point has a flexible spatial position and its position is updated as the specific model moves. In addition, a sound effect point (i.e., a sound source position) does not render any sound effect; that is, the sound effect point itself does not play sound.
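As a concrete illustration of this binding, the following C++ sketch shows one possible representation of a sound effect point that is anchored either to static terrain or to a movable model; the type and field names are assumptions made for illustration and are not taken from the actual engine.

```cpp
// Illustrative sketch only: a sound effect point that is either anchored to
// static terrain or bound to a movable model, in which case its world
// position follows the model. Types and fields are assumptions, not the
// patent's actual data layout.
#include <string>

struct Vec3 { float x, y, z; };

enum class BindTarget { Terrain, Model };

struct SoundEffectPoint {
    std::string type;        // e.g. "river", "leaves": groups points per sound
    BindTarget  target;      // what the point is bound to
    Vec3        localOffset; // offset relative to the bound terrain cell / model
    const Vec3* modelPos;    // world position of the bound model (if any)

    // The point only marks a sound source position; it never plays audio itself.
    Vec3 WorldPosition() const {
        if (target == BindTarget::Model && modelPos) {
            return { modelPos->x + localOffset.x,
                     modelPos->y + localOffset.y,
                     modelPos->z + localOffset.z };
        }
        return localOffset;   // terrain-bound: offset already in world space
    }
};
```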
In the present application, sound effect points can be arranged on irregular terrain areas, which solves the problem of a poor audio playing effect caused by a "sound effect ball" being unable to attach completely to an irregular terrain area, and improves the overall audio playing effect in the game scene.
Step S708, determining the audio to be played corresponding to the audio receiving range and the target playing position corresponding to the audio to be played based on the distribution information.
In step S708, the sound effect synthesis system identifies sound positions in the game scene by adding sound effect points instead of "sound effect balls". Unlike the "sound effect ball" method, after the sound positions are marked by sound effect points, only one sound effect exists in the whole game scene. Specifically, as shown in FIG. 8, during sound effect rendering the sound effect synthesis system may determine the audio receiving range corresponding to the listener from the position and orientation of the listener, and then determine the plurality of sound effect points that fall within that range. As shown in FIG. 8, a plurality of sound effect points are distributed along the stream, and each sound effect point corresponds to an actual listening area. On this basis, the sound effect synthesis system can select, by analyzing the distribution information of the plurality of sound effect points, at least one sound effect point for synthesizing the audio to be played, and determine the target playing position corresponding to the audio to be played according to the distribution of positions among the selected sound effect points. It should also be noted that the target playing position is different from the sound source positions identified by the sound effect points; the target playing position can be determined from the geometric shape formed among the sound source positions.
It should be noted that, unlike the "sound effect ball" approach, once the sound positions are identified by sound effect points, a listener that enters the audio receiving range hears only the single sound effect of the audio to be played within that range. This avoids the prior-art problem that a listener located between two "sound effect balls" hears sounds generated by a plurality of sound source points at the same time, thereby improving the user's game experience.
Step S710, playing the audio to be played at the target playing position.
In step S710, as shown in FIG. 8, after determining the target playing position, the sound effect synthesis system may determine an audio playing range corresponding to the audio to be played according to the target playing position; for example, the audio playing range is generated around the target playing position with a preset playing radius. The sound effect synthesis system may then detect the target position of the virtual object (the listener) in real time and determine whether that position is within the audio playing range. When the target position is detected to be within the audio playing range, the sound effect synthesis system plays the audio to be played at the target playing position.
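The per-frame playing decision of step S710 can be sketched as follows; the names and the print statements are placeholders for whatever play/stop events the engine would actually post, so this is only an illustrative assumption, not engine code.

```cpp
// Minimal sketch of the playback decision in step S710, under assumed names:
// once a target playing position has been derived, build a circular playing
// range around it and start/stop the audio depending on whether the listener
// is inside that range.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static float Dist(const Vec3& a, const Vec3& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Called every frame with the current listener position.
void UpdatePlayback(const Vec3& listenerPos,
                    const Vec3& targetPlayPos,
                    float playRadius,
                    bool& isPlaying) {
    bool inside = Dist(listenerPos, targetPlayPos) <= playRadius;
    if (inside && !isPlaying) {
        std::printf("start audio at target playing position\n");  // e.g. post a play event
        isPlaying = true;
    } else if (!inside && isPlaying) {
        std::printf("stop audio at target playing position\n");   // e.g. post a stop event
        isPlaying = false;
    }
}
```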
Based on the contents of the above steps S702 to S710, in the embodiment of the present invention the audio to be played and the target playing position of the audio to be played are determined according to the audio receiving range of the virtual radio. The position of the virtual character in the game scene is first obtained; the audio receiving range of the virtual radio in the game scene is then obtained according to the position of the virtual character; the distribution information of at least one sound effect point within the audio receiving range is determined; the audio to be played corresponding to the audio receiving range and the target playing position corresponding to the audio to be played are determined based on the distribution information; and the audio to be played is played at the target playing position. The virtual radio is bound to a specific position that is fixed relative to the virtual character, and the at least one sound effect point is preset in the game scene and identifies at least one sound source position.
It can be seen from the above that, in the present application, the target playing position of the audio to be played can be determined simply by determining the distribution of a plurality of sound effect points. Because no "sound effect balls" need to be configured, the audio effect of the game scene does not depend on the number, placement positions or placement ranges of "sound effect balls"; the tedious steps of the "sound effect ball" configuration process are thus avoided, and the configuration efficiency of the audio to be played is improved. In addition, the generation of virtual sound effect points is not affected by the terrain area: whether or not the terrain area is regular, the sound effect points can be matched to it, so the finally generated audio effect is unified, complete and continuous, has a stronger sense of direction, and therefore improves the player's gaming experience.
Therefore, the technical solution of the present application achieves the purpose of improving the audio playing effect, achieves the effect of improving the configuration efficiency of the audio to be played, and solves the technical problem of a poor audio playing effect caused by a low degree of fit between a placed sound source and the game scene.
In an alternative embodiment, the sound effect synthesis system may determine the audio receiving range of the virtual radio according to the target position and the target orientation after acquiring the target position and the target orientation of the virtual character in the game scene.
Optionally, after the virtual character enters the game scene, the sound effect synthesis system may obtain the target position and the target orientation of the virtual character in real time. Since the virtual radio is bound to a specific position that is fixed relative to the virtual character, the sound effect synthesis system can generate the audio receiving range of the listener based on the target position and target orientation of the virtual character. For example, the listener may hear sounds emitted up to 5 meters directly in front of the virtual character, but only up to 1 meter directly behind it, where the 5 meters and 1 meter are virtual distances in the game scene.
Through this process, the audio receiving ranges of the listener differ when the virtual character is at different target positions and target orientations, which gives the player a more realistic audio impression and improves the game experience.
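A minimal sketch of how such an orientation-biased audio receiving range could be derived is given below; the circular shape, the helper names and the default distances (taken from the 5-meter/1-meter example above) are illustrative assumptions only.

```cpp
// Minimal sketch (not engine code): one way to derive a circular audio
// receiving range from the listener position and facing direction.
// All names (Vec3, ReceivingRange, etc.) are illustrative assumptions.
struct Vec3 { float x, y, z; };

struct ReceivingRange {
    Vec3  center;   // center of the circular receiving area
    float radius;   // radius in game-world units
};

// Bias the range toward the facing direction so that, for example, sounds up
// to 5 m in front and 1 m behind the character are received, matching the
// asymmetric hearing distances used as an example above.
ReceivingRange ComputeReceivingRange(const Vec3& listenerPos,
                                     const Vec3& facingDir,
                                     float frontDist = 5.0f,
                                     float backDist  = 1.0f) {
    float radius = 0.5f * (frontDist + backDist);
    float offset = 0.5f * (frontDist - backDist);   // shift center forward
    ReceivingRange r;
    r.center = { listenerPos.x + facingDir.x * offset,
                 listenerPos.y + facingDir.y * offset,
                 listenerPos.z + facingDir.z * offset };
    r.radius = radius;
    return r;
}
```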
In an optional embodiment, the sound effect synthesis system may obtain a geometric shape corresponding to the distribution information, and determine a target playing position of the audio to be played according to the geometric shape, so as to synthesize the audio corresponding to the at least one sound effect point according to the geometric shape, thereby obtaining the audio to be played.
Optionally, there may be a plurality of sound effect points within the listener's audio receiving range, and those sound effect points may form various geometric shapes. The geometric shape may be regular, such as a square, a rectangle or a triangle, or it may be irregular. The sound effect synthesis system can determine the target playing position of the audio to be played based on the geometric shape formed by the sound effect points. For example, if there are 4 sound effect points within the listener's audio receiving range and they form a square, the sound effect synthesis system may take the center of the square as the target playing position of the audio to be played. After the target playing position is determined, the sound effect synthesis system synthesizes the audio corresponding to the plurality of sound effect points according to the geometric shape, obtaining the audio to be played. In the above example, the 4 sound effect points correspond to audio 1, audio 2, audio 3 and audio 4, respectively, and the sound effect synthesis system synthesizes them into one audio to be played, taking into account the distance and orientation information given by the geometric shape.
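For the square example above, the target playing position can be obtained as the centroid of the sound effect points inside the receiving range, as in the following sketch; the centroid rule and all names are assumptions used for illustration, and the actual synthesis of audio 1 to audio 4 into one clip is not shown.

```cpp
// Sketch under assumptions: derive the target playing position from the
// geometric shape formed by the sound effect points inside the receiving
// range. For the square example in the text (4 points), the centroid is the
// center of the square; the per-point audio would then be synthesized into a
// single clip placed there.
#include <vector>

struct Vec3 { float x, y, z; };

Vec3 TargetPlayPosition(const std::vector<Vec3>& pointsInRange) {
    Vec3 c{0.0f, 0.0f, 0.0f};
    if (pointsInRange.empty()) return c;
    for (const Vec3& p : pointsInRange) {
        c.x += p.x; c.y += p.y; c.z += p.z;
    }
    float n = static_cast<float>(pointsInRange.size());
    c.x /= n; c.y /= n; c.z /= n;
    return c;   // center of the shape; the synthesized audio is played here
}
```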
Through this process, wherever the listener is located within the audio receiving range, the listener hears the audio to be played coming from the target playing position. This avoids the prior-art problem that a listener located between two "sound effect balls" hears the sounds generated by a plurality of sound source points, and improves the player's game experience.
In an optional embodiment, the sound effect synthesis system may determine the sound effect point types corresponding to at least one sound effect point, and determine the geometric shape formed by the sound effect points of each type when the number of the sound effect point types is multiple, so as to determine the target playing position corresponding to the sound effect point of each type according to the geometric shape formed by the sound effect points of each type.
Optionally, the sound effect points may correspond to different sound effect point types. For example, sound effect point 1, sound effect point 2 and sound effect point 3 may be used to synthesize the sound of leaves blown by the wind, while sound effect point 4, sound effect point 5 and sound effect point 6 may be used to synthesize the sound of a river; the sound of leaves blown by the wind and the sound of the river correspond to two different sound effect point types. The sound effect synthesis system can determine a target playing position A according to the geometric shape A formed by sound effect points 1, 2 and 3 and place the audio A to be played, which produces the sound of leaves blown by the wind, at the target playing position A. Likewise, it can determine a target playing position B according to the geometric shape B formed by sound effect points 4, 5 and 6 and place the audio B to be played, which produces the sound of the river, at the target playing position B.
It should be noted that by dividing the sound effect points into different sound effect point types, different types of sounds can be played at different target playing positions, so that the problem of cross playing among multiple types of sounds is avoided, and the effect of playing multiple to-be-played audios according to an actual game scene is realized.
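The per-type handling described above can be sketched as follows: points inside the receiving range are grouped by their sound effect point type, and each group yields its own target playing position. All names are illustrative assumptions.

```cpp
// Illustrative sketch: when several sound effect point types coexist (e.g.
// "leaves" and "river" in the example above), each type gets its own target
// playing position, computed from the shape formed by the points of that type.
#include <map>
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };

struct TypedPoint { std::string type; Vec3 pos; };

Vec3 Centroid(const std::vector<Vec3>& pts) {
    Vec3 c{0, 0, 0};
    for (const Vec3& p : pts) { c.x += p.x; c.y += p.y; c.z += p.z; }
    float n = pts.empty() ? 1.0f : static_cast<float>(pts.size());
    c.x /= n; c.y /= n; c.z /= n;
    return c;
}

std::map<std::string, Vec3> PerTypeTargets(const std::vector<TypedPoint>& inRange) {
    std::map<std::string, std::vector<Vec3>> grouped;
    for (const TypedPoint& p : inRange) grouped[p.type].push_back(p.pos);

    std::map<std::string, Vec3> targets;       // one playing position per type
    for (const auto& kv : grouped) targets[kv.first] = Centroid(kv.second);
    return targets;
}
```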
In an optional embodiment, after acquiring an audio playing range corresponding to the audio to be played, the sound effect synthesizing system detects whether the target position of the virtual radio is within the audio playing range, obtains a detection result, and plays the audio to be played at the target playing position when the detection result indicates that the target position is within the audio playing range.
Optionally, after the listener enters the game, the sound effect synthesis system monitors the target position of the listener in real time, and when the listener enters the audio playing range of the audio to be played, the sound effect synthesis system plays the audio to be played at the target playing position, so that the listener can receive the audio to be played. In addition, the audio playback range may be a range of various shapes, for example, the audio playback range may be a range of a regular shape such as a circle, a rectangle, a triangle, or the like, or may be a range of other irregular shapes. The target playing position may be any position within the audio playing range.
In an alternative embodiment, when the detection result indicates that the target position is outside the audio playing range, the audio to be played is stopped playing at the target playing position.
Optionally, when the target position of the listener is outside the audio playing range, the sound effect synthesis system stops playing the audio to be played at the target playing position. For example, the listener moves along with virtual character A. When virtual character A moves to position B, which is within the audio playing range of audio 1 to be played, the sound effect synthesis system plays audio 1. If virtual character A then continues to move to position C, which is not within the audio playing range of audio 1, the sound effect synthesis system stops playing audio 1.
In this process, the target position of the listener determines whether the audio to be played within an audio playing range is played, which gives the game player a listening experience closer to real hearing, avoids occupying unnecessary audio resources, and reduces performance consumption.
In an optional embodiment, the sound effect synthesizing system further detects whether the target playing position is located in a preset area, and if the target playing position is located in the preset area and the virtual radio is located in the preset area, the audio to be played is played at the target playing position.
Alternatively, the preset area may be a limited area, for example, in a game, the limited area may be a house, the target playing position may be a position of a music box in the house, and when listener enters the house, the sound effect synthesizing system plays the audio to be played at the music box.
In an alternative embodiment, if the target playing position is located within the preset area and the virtual radio is located outside the preset area, the audio to be played is stopped playing at the target playing position.
Optionally, still taking a house as the example of the limited area, there is a music box in the house and the position of the music box is the target playing position; if the listener does not walk into the house, the sound effect synthesis system does not play the audio to be played.
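The restricted-area behaviour of the house and music box example can be sketched as a simple containment test; the axis-aligned box and the function names are assumptions for illustration only.

```cpp
// Sketch of the "limited area" behaviour described above, with assumed types:
// the audio plays only while both the target playing position (the music box)
// and the listener are inside the preset area (the house).
struct Vec3 { float x, y, z; };

struct Box {   // an axis-aligned preset area, purely illustrative
    Vec3 min, max;
    bool Contains(const Vec3& p) const {
        return p.x >= min.x && p.x <= max.x &&
               p.y >= min.y && p.y <= max.y &&
               p.z >= min.z && p.z <= max.z;
    }
};

bool ShouldPlayInPresetArea(const Box& area,
                            const Vec3& targetPlayPos,
                            const Vec3& listenerPos) {
    // Both conditions must hold; if the listener steps outside, playback stops.
    return area.Contains(targetPlayPos) && area.Contains(listenerPos);
}
```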
It should be noted that, in a game such as an adventure game, sound effects are important components in the whole game, and by determining whether a virtual object is in a preset area and determining whether to play audio to be played in the preset area, a game player can be given a more realistic and exciting game experience.
In an optional embodiment, before determining the distribution information of the at least one sound effect point, the sound effect synthesis system acquires the current number of sound effect points corresponding to at least one virtual component in the game scene and the distribution density of the at least one sound effect point on the at least one virtual component; it then sets the at least one sound effect point on the at least one virtual component according to the current number of sound effect points and the distribution density, and establishes an association relationship between the at least one sound effect point and the at least one virtual component.
Optionally, the virtual component may be a terrain in the game map or a specific model. As shown in FIG. 9, since many ambient sound effects are terrain-dependent, sound effect points can be bound directly to the terrain. The terrain may be regular, for example a rectangular grassland, or irregular, such as a river or a waterfall. Further, as shown in FIG. 10, sound effect points may also be bound to a specific model in the game scene, for example a beach or a house, so that a sound effect point has a flexible spatial position and its position is updated as the specific model moves.
Further, the current number of sound effect points and the distribution density differ for different virtual components. For example, a grassland may need only a few sound effect points, sparsely distributed, while a river may need many sound effect points, densely distributed. According to the current number of sound effect points and the distribution density corresponding to each virtual component, the sound effect synthesis system binds the current sound effect points to the virtual component, that is, it establishes the association relationship between the current sound effect points and the virtual component.
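One way to picture this binding step is the following sketch, which scatters a requested number of points over a component and records the association; the grid placement and all names are assumptions, since the actual editor places points with a brush tool, as described later.

```cpp
// Illustrative sketch: scatter sound effect points over a virtual component
// (a terrain patch or a model) according to a requested count and spacing,
// and record the association between each point and its component.
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };

struct PlacedPoint {
    Vec3        pos;
    std::string componentId;   // association between point and virtual component
};

std::vector<PlacedPoint> BindPoints(const std::string& componentId,
                                    const Vec3& origin,
                                    int count,
                                    float spacing /* derived from density */) {
    std::vector<PlacedPoint> pts;
    int perRow = 1;
    while (perRow * perRow < count) ++perRow;   // roughly square layout
    for (int i = 0; i < count; ++i) {
        Vec3 p{ origin.x + (i % perRow) * spacing,
                origin.y,
                origin.z + (i / perRow) * spacing };
        pts.push_back({ p, componentId });
    }
    return pts;
}
```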
In the present application, sound effect points can be bound to irregular terrain as well as to specific models in the scene, so the sound effect points are arranged in a way that fits the game scene more closely, which improves the overall audio effect in the game scene.
In an alternative embodiment, the sound effect synthesis system may determine a maximum sound effect point number and a maximum distribution density corresponding to the current virtual component, so as to set at least one sound effect point on the current virtual component according to the maximum sound effect point number and the maximum distribution density when the current sound effect point number is greater than or equal to the maximum sound effect point number and/or the distribution density is greater than or equal to the maximum distribution density; when the number of current sound effect points is smaller than the number of maximum sound effect points and the distribution density is smaller than the maximum distribution density, at least one sound effect point is set on the current virtual assembly according to the number of current sound effect points and the distribution density. Wherein, the current virtual component is any one of the at least one virtual component.
Optionally, when the current number of sound effect points exceeds the maximum number of sound effect points or the distribution density exceeds the maximum distribution density, the sound effect synthesis system binds the virtual component using the maximum number of sound effect points or the maximum distribution density. For example, for virtual component A, if the current number of sound effect points is 20 but the maximum number of sound effect points of virtual component A is 10, the sound effect synthesis system binds 10 sound effect points to virtual component A. When the current number of sound effect points does not exceed the maximum number and the distribution density does not exceed the maximum density, the sound effect synthesis system binds the virtual component according to the current number of sound effect points and the distribution density. Still taking virtual component A as an example, if the current number of sound effect points is 8 and the maximum number is 10, the sound effect synthesis system binds 8 sound effect points to virtual component A.
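The clamping rule can be sketched as below, under one reasonable reading of the rule (each value is clamped to its own maximum, which matches the 20-to-10 and 8-stays-8 examples above); the type names are assumptions.

```cpp
// Minimal sketch of the clamping rule described above (names are assumed):
// if the requested point count or density exceeds the component's maximum,
// the value is clamped to that maximum; otherwise the requested value is kept.
#include <algorithm>

struct PointBudget { int count; float density; };

PointBudget ClampToComponentLimits(PointBudget requested,
                                   int maxCount,
                                   float maxDensity) {
    PointBudget used = requested;
    if (requested.count >= maxCount || requested.density >= maxDensity) {
        used.count   = std::min(requested.count,   maxCount);
        used.density = std::min(requested.density, maxDensity);
    }
    return used;   // e.g. 20 requested, max 10 -> 10 points are bound
}
```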
It should be noted that, by setting the maximum number of sound effect points and the maximum distribution density, the occupation of repeated unnecessary sounds can be avoided, thereby achieving the effect of reducing the consumption of performance.
In an alternative embodiment, when detecting that the geometric information of the at least one virtual component changes, the sound effect synthesis system acquires the changed target geometric information, and adjusts the current number of sound effect points and/or the distribution density of the at least one sound effect point of the at least one virtual component based on the target geometric information.
Optionally, in a game scene, the geometric information of the virtual component may change, for example, taking the virtual component as a poison circle, the poison circle may continuously become larger according to the game progress, that is, the geometric information of the poison circle changes. The sound effect synthesizing system monitors the change of the poison circle in real time, and adjusts the current sound effect point number of the poison circle and/or the distribution density of at least one sound effect point at any time according to the changed poison circle, for example, the number of the sound effect points is continuously increased and the distribution density is improved.
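For the poison-circle example, the adjustment on a geometry change might look like the following sketch, where the point count is rescaled with the circle; the proportional policy and all names are illustrative assumptions.

```cpp
// Sketch under assumptions: when a virtual component's geometry changes (the
// expanding "poison circle" example), rescale the number of sound effect
// points with the component so the audio stays continuous and synchronized.
#include <cmath>

struct CirclePointSetup {
    float radius;        // current radius of the circular component
    int   pointCount;    // sound effect points currently bound to it
};

CirclePointSetup OnGeometryChanged(const CirclePointSetup& old, float newRadius,
                                   int maxCount) {
    CirclePointSetup updated;
    updated.radius = newRadius;
    // Keep the point count proportional to the circumference (one policy of many).
    float scale = (old.radius > 0.0f) ? newRadius / old.radius : 1.0f;
    int count = static_cast<int>(std::lround(old.pointCount * scale));
    updated.pointCount = (count > maxCount) ? maxCount : (count < 1 ? 1 : count);
    return updated;
}
```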
It should be noted that, by detecting the change information of the virtual component and adjusting the number and distribution density of the sound effect points in real time, it can be ensured that a game player can hear continuous and synchronous audio, which is beneficial to improving the game experience of the game player.
In an alternative embodiment, the method of playing audio in the present application is described below in comparison with the prior-art method of configuring "sound effect balls", taking the Messiah engine as the sound effect synthesis system as an example. Specifically, as shown in FIG. 2, in the prior art, when a Listener mounted on the game camera enters a "sound effect ball" area along with a game character, the Messiah engine plays the Event added to the "sound effect ball" and registers an Entity (i.e., Game Object) in the engine for the Event. The Messiah engine then acquires the coordinate information of the Entity in real time and calculates the 3D position and distance parameters between the Entity and the Listener, based on which it simulates effects such as 3D sound change and sound attenuation.
Further, as shown in FIG. 3, the rendering of a "sound effect ball" can be illustrated by simulating a meandering stream that runs through an entire forest. The game player expects to hear the gurgle of the stream at any location near it: the sound of the stream becomes more noticeable as the player approaches it and slowly fades away as the player moves away from it.
Optionally, as shown in FIG. 4, to achieve this effect with "sound effect balls", a plurality of "sound effect balls" need to be placed on the stream, and 3D position parameters and attenuation parameters must be set for each of them. After the configuration is completed, the stream can indeed be heard from most positions along the river, but many problems remain. For example, if there are too many "sound effect balls", the finally rendered sound effect tends toward white noise that carries too much useless sound information, imposing a listening burden on the player. Introducing too many "sound effect balls" is also a heavy burden for the engine: in particular, if many environmental sound effects in a scene need to be simulated, rendering and synthesizing a large number of voices consumes a large amount of resources, and the engine may not be able to bear the resulting performance burden.
In addition, if the listener is in the area between two "sound effect balls", as shown in FIG. 5, sounds from two directions reach the listener, which causes an auditory error for the player. Further, as shown in FIG. 6, to solve this problem the prior art can only give the listener richer sound by continuously increasing the number of "sound effect balls"; but if there are too many "sound effect balls", the finally synthesized sound effect tends toward noise, and the audio playing effect becomes worse. Moreover, because the "sound effect balls" are independent sound sources, a listener moving among them easily hears sounds emitted by several "sound effect balls" at the same time, so synchronization and continuity between the sounds cannot be guaranteed at all.
Alternatively, still taking the Messiah engine as the sound effect synthesis system as an example, the present application may combine sound sources (area sound sources) to simulate a plurality of sound sources in the same area emitting the same sound. Specifically, the following Wwise interface is integrated into the Messiah engine:
MultiPositionType_MultiDirections.
The Messiah engine can reconstruct the planar sound source information using the MultiPositionType_MultiDirections interface, so that a single sound source transmits sound to the game player from multiple directions with realistic attenuation. As shown in FIG. 11, the environmental sound emitted by the lake can be synthesized from sound effect points at 4 different positions. When the listener is at position A, the lake's ambient sound comes from all directions. In particular, this can be achieved by setting a suitably high dispersion value in the Messiah engine; a high dispersion value spreads the sound so that it is played through all loudspeakers.
Optionally, when listener is located at position B, the location of listener has exceeded a maximum attenuation distance (corresponding to the audio reception range) at which the listener can hear either no sound of the lake or only a weak sound.
Alternatively, when listener is in the C position, listener will hear the sound from the lake from a loudspeaker with a large opening angle, but the sound will be attenuated due to the distance between listener and the lake. It is noted that in this case, the sound is not maximally attenuated, since listener is still within the maximum attenuation distance.
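The only Wwise call named in this description is MultiPositionType_MultiDirections; the following C++ sketch shows how that call could be used to register the lake's sound effect points as a single emitter. It is a hedged illustration, not the Messiah engine's actual integration code: it assumes the Wwise sound engine has already been initialized, and the event name "Play_LakeAmbience" and the game object id are hypothetical.

```cpp
// Hedged sketch of the Wwise side of this technique; only SetMultiplePositions
// with MultiPositionType_MultiDirections is taken from the description above.
#include <AK/SoundEngine/Common/AkSoundEngine.h>
#include <vector>

struct Point3 { float x, y, z; };

static const AkGameObjectID kLakeObject = 1001;   // arbitrary emitter id (assumption)

void RegisterLakeAmbience(const std::vector<Point3>& soundEffectPoints) {
    AK::SoundEngine::RegisterGameObj(kLakeObject);

    std::vector<AkSoundPosition> positions(soundEffectPoints.size());
    for (size_t i = 0; i < soundEffectPoints.size(); ++i) {
        positions[i].SetPosition(soundEffectPoints[i].x,
                                 soundEffectPoints[i].y,
                                 soundEffectPoints[i].z);
        positions[i].SetOrientation(0.0f, 0.0f, 1.0f,   // front vector
                                    0.0f, 1.0f, 0.0f);  // top vector
    }

    // With MultiDirections, the positions are treated as multiple directions of
    // one and the same voice, so the player hears a single, coherently
    // attenuated sound rather than several independent sources.
    AK::SoundEngine::SetMultiplePositions(
        kLakeObject,
        positions.data(),
        static_cast<AkUInt16>(positions.size()),
        AK::SoundEngine::MultiPositionType_MultiDirections);

    AK::SoundEngine::PostEvent("Play_LakeAmbience", kLakeObject);
}
```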
In an alternative embodiment, when the function of the Wwise interface is used, if the object shape (corresponding to the geometric shape corresponding to the distribution information) is reconstructed by overlapping a plurality of sound positions, more computing resources are required to compute the target position of the audio to be played every time a new sound effect point is added, so that the consumption of audio rendering may be increased. Therefore, the function of the Wwise interface can be optimized in the Messiah engine.
Specifically, an Ambience component using the Wwise interface can be added to the Messiah engine, and a configuration brush tool for sound effect points is introduced into the Ambience component. In addition, because the sound effect points are bound to specific positions in the game scene, when the final sound rendering effect is controlled through the geometric shape formed by the sound effect points, the computer does not have to carry excessive extra audio information, which reduces the consumption of computing resources.
It should be noted that, for the Messiah engine, the number and distribution density of the sound effect points are not what matters; what matters is the geometry formed among the sound effect points, and the sound synthesized for the Listener in the effective area ultimately depends on that geometry. In addition, in the Messiah engine, sound effect points are strictly aligned to the terrain grid, so the Ambience component must be bound to a terrain component.
Alternatively, the following is a specific flow description for configuring the Ambience component in the Messiah engine. As shown in FIG. 12, an Entity is first created in the Messiah engine; then, in the Details panel, right-click and click "Add" and "Ambience" in turn. The Details panel also shows components such as "point light", "spot light", "area light", "sphere local environment volume", "cube local environment volume", "reflection probe", "light probe", "point closed", "SH volume", "visibility cup", "level volume", "trap volume" and "audio".
Further, after entering the settings of the Ambience component, the Messiah engine binds a terrain component in the game scene. FIG. 13 shows the setting interface of the Ambience component, which is described as follows:
Enabled indicates whether the Ambience component is enabled; if it is not enabled, the sound effect points do not take effect in the game scene and no sound effect point blocks are generated;
Event Param is the Event name, which corresponds to the Event name in the bnk file under the Sounds directory;
Preview Color is the color used to preview the sound effect point blocks in the game scene;
Preview Always is used to preview the sound effect point blocks in the game scene even when the component is not selected;
Preview Size sets the size of the previewed sound effect point blocks;
Play In Editor controls playback during editing; if it is not selected, the audio to be played is not played while sound effect points are being edited;
Refresh refreshes the data of the whole audio to be played;
Max Anchors is the maximum number of available anchor points (i.e., sound effect points); anchor points beyond this number are ignored;
Num Anchors is the current number of anchor points (i.e., the current number of sound effect points);
Clear Anchors deletes all anchor points; this operation cannot be undone;
Connected Terrains lists the terrains on which the current anchor points take effect; if a scene contains several terrains, the terrain to which the audio to be played is bound must be determined;
Select Terrain cycles through the visible terrains in the current game scene;
Restrict In Volume limits the audio to an area; if this item is selected, the audio to be played only takes effect within the set area (i.e., the preset area), stops playing when the Listener leaves the area, and restarts when the Listener enters the area again;
Restrict Volume Shape sets the shape of the preset area;
Max Play Times sets how many times the audio to be played is repeated while the Listener is within the preset area, where 0 means no limit;
Fade Out Time sets the fade-out time used when the audio to be played stops playing;
Fade Out Type sets the fade-out mode used when the audio to be played stops playing.
In addition, as shown in FIG. 14, the brush tool for configuring sound effect points in the Ambience component can be displayed as a control, so that the operator can open the brush tool directly with a click.
Optionally, as shown in FIG. 15, when a sound effect point is bound to a virtual component such as a terrain, the Select Terrain control and the Action control may be clicked in sequence, after which a virtual circular brush can be seen in the game scene. In FIG. 15, Max Anchors has a value of "512", Num Anchors has a value of "0", semi circle rtpc and Restrict In Volume have a value of "x", Restrict In Volume has a value of "Sphere", and semi circle rtpc update has a value of "0.2".
Further, as shown in fig. 16, in the GIZMO panel the distribution density and the number of sound effect points can be changed by adjusting Radius and Alignment. After the adjustment is finished, the mouse is moved into the game scene view, and sound effect points can be generated on the terrain by left-clicking the Terrain material. In addition, as shown in fig. 17, if sound effect points are to be removed, ALT is held down and the removal area is erased by left-clicking.
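As an illustration of the brush behaviour just described, the sketch below generates or erases sound effect points under a circular brush on a uniform terrain grid; the Radius and Alignment semantics, the grid model, and the function name are assumptions and may differ from the Messiah engine's actual GIZMO brush.

```python
import math

def brush_points(center, radius, alignment, existing, max_anchors=512, erase=False):
    """Generate (or erase) sound effect points on a uniform terrain grid under a circular brush.

    center    -- (x, z) position of the brush in the scene
    radius    -- brush Radius: points are placed within this distance of the center
    alignment -- grid spacing, controlling the distribution density
    existing  -- set of (x, z) points already placed
    erase     -- True emulates ALT + left click, removing the points under the brush
    """
    if erase:
        return {p for p in existing if math.dist(p, center) > radius}
    cx, cz = center
    points = set(existing)
    steps = int(radius // alignment)
    for i in range(-steps, steps + 1):
        for j in range(-steps, steps + 1):
            candidate = (cx + i * alignment, cz + j * alignment)
            if math.dist(candidate, center) <= radius and len(points) < max_anchors:
                points.add(candidate)  # anchor points beyond Max Anchors are ignored
    return points
```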
Optionally, as shown in fig. 18, when a sound effect point is bound to a virtual component that is a specific model, such as a geometric model, an operator may first click the Add Primative Binder control and the Action control in the Messiah engine in sequence, after which the Messiah engine displays the created component within the Ambience component. Fig. 18 also shows controls such as "restrict volume shape", "semi circle rtpc radius", "semi circle rtpc update", "semi circle rtpc LF", and "semi circle rtpc RF".
Further, as shown in figs. 19 and 20, the operator may select any specific model in the game scene and copy the name of its parent model, for example "cjc_building_niaoju_01_002". The operator then pastes the name into the "Entity Name" field in the "Primative Binders" directory, and the Messiah engine generates the corresponding sound effect points on that model; the resulting display effect is shown in fig. 21.
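A minimal sketch of the name-based binding step follows, assuming the scene is modelled as a plain dictionary mapping model names to surface vertices; the real "Primative Binders" / "Entity Name" workflow is only approximated here.

```python
def bind_points_to_model(scene, entity_name, bindings, max_anchors=512):
    """Attach sound effect points at the surface vertices of the model named entity_name."""
    vertices = scene.get(entity_name)        # e.g. "cjc_building_niaoju_01_002"
    if vertices is None:
        raise KeyError(f"model {entity_name!r} not found in scene")
    points = list(vertices)[:max_anchors]    # anchor points beyond Max Anchors are ignored
    bindings[entity_name] = points           # association: sound effect points <-> virtual component
    return points

# Example usage with a toy scene:
# scene = {"cjc_building_niaoju_01_002": [(0.0, 2.0, 0.0), (1.0, 2.0, 0.0), (0.0, 2.0, 1.0)]}
# bindings = {}
# bind_points_to_model(scene, "cjc_building_niaoju_01_002", bindings)
```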
From the above, as shown in fig. 22, the process by which the operator configures sound effect points on virtual components through the Ambience component can be summarized as follows: first, the Ambience component parameters are defined; then a binding type is selected, where binding types are divided into binding sound effect points on terrain-like virtual components and binding sound effect points on geometry-like virtual components; finally, the sound effect points are generated according to the type of the virtual component.
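Under the assumption that a binding is described by its type ("terrain" or "geometry") plus the parameters discussed above, the flow of fig. 22 could be sketched as follows, reusing the illustrative AmbienceSettings, brush_points, and bind_points_to_model helpers from the earlier sketches; this is a sketch of the workflow, not the engine's implementation.

```python
def configure_sound_effect_points(settings, binding):
    """Define component parameters, pick a binding type, then generate the sound effect points."""
    if not settings.enabled:
        return []                              # a disabled component generates no points
    if binding["type"] == "terrain":           # terrain-like virtual component
        points = brush_points(binding["center"], binding["radius"], binding["alignment"],
                              set(), max_anchors=settings.max_anchors)
        return sorted(points)
    if binding["type"] == "geometry":          # geometry-like virtual component
        return bind_points_to_model(binding["scene"], binding["entity_name"],
                                    bindings={}, max_anchors=settings.max_anchors)
    raise ValueError(f"unknown binding type: {binding['type']}")
```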
It should be noted that the above process solves not only the prior-art problem that sound configuration cannot be accurately attached to virtual components in the game scene, but also the prior-art problem that the audio to be played cannot be set up in large batches, thereby improving the configuration efficiency of the audio to be played.
In summary, in the present application the target playing position of the audio to be played can be determined simply by determining the distribution of the plurality of sound effect points. Because "sound effect balls" do not need to be configured, the audio effect of the game scene does not depend on the number, placement positions, or placement ranges of "sound effect balls", which avoids the complicated steps of the "sound effect ball" configuration process and improves the configuration efficiency of the audio to be played. In addition, the generation of virtual sound effect points is not affected by the terrain area: whether or not the terrain area is regular, the sound effect points can be matched to it, which ensures that the final audio effect is unified, complete, and continuous, with a stronger sense of direction, thereby improving the player's gaming experience.
Therefore, the technical solution of the present application improves the audio playing effect, improves the configuration efficiency of the audio to be played, and solves the technical problem of a poor audio playing effect caused by a low degree of fit between placed sound sources and the game scene.
Example 2
According to another aspect of the embodiments of the present invention, an apparatus for playing audio is also provided. Fig. 23 is a schematic diagram of an alternative apparatus for playing audio according to an embodiment of the present application. As shown in fig. 23, the apparatus may include: a first obtaining module 2301, configured to obtain the position of a virtual character in a game scene; a second obtaining module 2302, configured to obtain the audio receiving range of a virtual radio in the game scene according to the position of the virtual character; a first determining module 2303, configured to determine distribution information of at least one sound effect point in the audio receiving range, where the at least one sound effect point is used to identify at least one sound source position of the audio to be played; a second determining module 2304, configured to determine, based on the distribution information, the audio to be played corresponding to the audio receiving range and a target playing position corresponding to the audio to be played; and a playing module 2305, configured to play the audio to be played at the target playing position.
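A minimal sketch of how the modules of fig. 23 chain together is given below; each parameter stands in for one module (2301-2305), and the concrete callables a caller would supply are assumptions rather than the patented implementation.

```python
def run_playback_pipeline(get_position, get_receiving_range, get_distribution,
                          decide_audio_and_target, play):
    """Chain the five modules of fig. 23; each callable stands in for one module."""
    position = get_position()                               # first obtaining module 2301
    receiving_range = get_receiving_range(position)         # second obtaining module 2302
    distribution = get_distribution(receiving_range)        # first determining module 2303
    audio, target = decide_audio_and_target(distribution)   # second determining module 2304
    play(audio, target)                                     # playing module 2305
```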
It should be noted that the first obtaining module 2301, the second obtaining module 2302, the first determining module 2303, the second determining module 2304 and the playing module 2305 in this embodiment correspond to steps S702 to S710 in embodiment 1, respectively.
Optionally, the second obtaining module further includes: a third obtaining module and a third determining module. The third acquisition module is used for acquiring the target position and the target orientation of the virtual character in the game scene; and the third determining module is used for determining the audio receiving range of the virtual radio according to the target position and the target orientation.
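As one possible reading of this step, the sketch below builds a receiving range as a circular sector centred on the virtual radio, with the target position as the centre and the target orientation as the facing direction; the radius and field-of-view parameters are illustrative assumptions, not values from the patent.

```python
import math

def receiving_range(target_position, target_orientation_deg, radius=30.0, fov_deg=360.0):
    """Return a predicate that tests whether a scene point lies inside the receiving range."""
    def contains(point):
        dx, dz = point[0] - target_position[0], point[1] - target_position[1]
        if math.hypot(dx, dz) > radius:
            return False                      # outside the radius around the virtual radio
        if fov_deg >= 360.0:
            return True                       # omnidirectional receiver
        bearing = math.degrees(math.atan2(dz, dx))
        delta = abs((bearing - target_orientation_deg + 180.0) % 360.0 - 180.0)
        return delta <= fov_deg / 2.0         # within the oriented sector
    return contains
```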
Optionally, the second determining module further includes: the device comprises a fourth acquisition module, a fourth determination module and a synthesis module. The fourth acquisition module is used for acquiring the geometric shape corresponding to the distribution information; the fourth determining module is used for determining the target playing position of the audio to be played according to the geometric shape; and the synthesis module is used for synthesizing the audio corresponding to the at least one sound effect point according to the geometric shape to obtain the audio to be played.
Optionally, the fourth determining module further includes: a fifth determination module, a sixth determination module, and a seventh determination module. The fifth determining module is used for determining the type of the sound effect point corresponding to at least one sound effect point; the sixth determining module is used for determining the geometric shape formed by the sound effect points of each type when the number of the sound effect point types is multiple; and the seventh determining module is used for determining the target playing position corresponding to each type of sound effect point according to the geometric shape formed by each type of sound effect point.
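A minimal sketch of the grouping described here: sound effect points are grouped by type and the geometry of each group is reduced to its centroid, which serves as that type's target playing position. Using the centroid is an assumption; the patent only requires a target position derived from the geometric shape.

```python
from collections import defaultdict

def target_positions_by_type(points):
    """points: iterable of (type_name, (x, y, z)); returns {type_name: centroid position}."""
    groups = defaultdict(list)
    for kind, position in points:
        groups[kind].append(position)
    targets = {}
    for kind, positions in groups.items():
        n = len(positions)
        targets[kind] = tuple(sum(p[i] for p in positions) / n for i in range(3))
    return targets
```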
Optionally, the playing module further includes: the device comprises a sixth acquisition module, a detection module and a first playing module. The sixth obtaining module is used for obtaining an audio playing range corresponding to the audio to be played; the detection module is used for detecting whether the target position of the virtual radio is within an audio playing range or not to obtain a detection result; and the first playing module is used for playing the audio to be played at the target playing position when the detection result indicates that the target position is within the audio playing range.
Optionally, the apparatus for playing audio further includes: and the playing stopping module is used for stopping playing the audio to be played at the target playing position when the detection result indicates that the target position is out of the audio playing range.
Optionally, the first playing module further includes: the device comprises a first detection module and a second playing module. The first detection module is used for detecting whether the target playing position is located in a preset area; and the second playing module is used for playing the audio to be played at the target playing position if the target playing position is located in the preset area and the virtual radio is located in the preset area.
Optionally, the apparatus for playing audio further includes: and the first playing stopping module is used for stopping playing the audio to be played at the target playing position if the target playing position is located in the preset area and the virtual radio is located outside the preset area.
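The playback decision carried by these modules can be sketched as follows, with the audio playing range and the preset region both modelled as simple spheres for illustration; the actual region shapes are configurable (see Restrict Volume Shape above).

```python
import math

def should_play(radio_position, target_position, play_range, preset_region=None):
    """play_range and preset_region are (center, radius) spheres; returns True if audio should play."""
    def inside(point, region):
        center, r = region
        return math.dist(point, center) <= r

    if not inside(radio_position, play_range):
        return False                                   # radio outside the playing range: stop playback
    if preset_region and inside(target_position, preset_region):
        return inside(radio_position, preset_region)   # restricted: radio must also be inside the region
    return True
```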
Optionally, the apparatus for playing audio further includes: a seventh obtaining module and a setting module. The seventh obtaining module is used for obtaining the current number of sound effect points corresponding to at least one virtual component in the game scene and the distribution density of the at least one sound effect point on the at least one virtual component; and the setting module is used for setting the at least one sound effect point on the at least one virtual component according to the current number of sound effect points and the distribution density, and establishing an association relation between the at least one sound effect point and the at least one virtual component.
Optionally, the setting module further includes: the device comprises an eighth determining module, a first setting module and a second setting module. The eighth determining module is configured to determine the maximum number of sound effect points and the maximum distribution density corresponding to the current virtual component, where the current virtual component is any one of the at least one virtual component; the first setting module is used for setting at least one sound effect point on the current virtual component according to the maximum sound effect point number and the maximum distribution density when the current sound effect point number is larger than or equal to the maximum sound effect point number and/or the distribution density is larger than or equal to the maximum distribution density; and the second setting module is used for setting at least one sound effect point on the current virtual component according to the current sound effect point quantity and the distribution density when the current sound effect point quantity is smaller than the maximum sound effect point quantity and the distribution density is smaller than the maximum distribution density.
Optionally, the apparatus for playing audio further includes: an eighth obtaining module and an adjusting module. The eighth obtaining module is configured to obtain changed target geometric information when it is detected that the geometric information of the at least one virtual component changes; and the adjusting module is used for adjusting the current sound effect point number and/or the distribution density of at least one sound effect point of at least one virtual component based on the target geometric information.
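A minimal sketch of the setting and adjusting logic just described: the requested point count and density are clamped to the component's maxima, and the count is recomputed when the component's geometry changes. Scaling the count by surface area is an assumption chosen only to make the adjustment concrete.

```python
def clamp_layout(requested_count, requested_density, max_count, max_density):
    """Clamp the requested point count and density to the component's maxima."""
    if requested_count >= max_count or requested_density >= max_density:
        return max_count, max_density               # fall back to the maximum layout
    return requested_count, requested_density       # request is within limits

def adjust_for_geometry(count, density, old_area, new_area, max_count, max_density):
    """Rescale the point count when the component's surface area changes, then re-clamp."""
    scaled_count = int(count * (new_area / old_area))   # keep density, scale the count with the area
    return clamp_layout(scaled_count, density, max_count, max_density)
```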
Example 3
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to execute the method for playing audio in the above embodiment 1 when running.
Example 4
According to another aspect of the embodiments of the present invention, there is also provided an electronic device including one or more processors and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to run the programs, wherein the programs are configured to perform the method for playing audio in embodiment 1 described above.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described apparatus embodiments are merely illustrative; for example, the division of the units may be a logical division, and in actual implementation there may be another division: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling, direct coupling, or communication connection may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or take another form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (14)

1. A method of playing audio, comprising:
acquiring the position of a virtual character in a game scene;
acquiring an audio receiving range of a virtual radio in the game scene according to the position of the virtual character, wherein the virtual radio is bound at a specific position relatively fixed with the virtual character;
determining distribution information of at least one sound effect point in the audio receiving range, wherein the at least one sound effect point is preset in the game scene and is used for identifying at least one sound source position;
determining the audio to be played corresponding to the audio receiving range and a target playing position corresponding to the audio to be played based on the distribution information;
and playing the audio to be played at the target playing position.
2. The method of claim 1, wherein obtaining the audio receiving range of the virtual radio in the game scene according to the position of the virtual character comprises:
acquiring a target position and a target orientation of the virtual character in the game scene;
and determining the audio receiving range of the virtual radio according to the target position and the target orientation.
3. The method according to claim 1, wherein determining the audio to be played corresponding to the audio receiving range and the target playing position corresponding to the audio to be played based on the distribution information comprises:
acquiring a geometric shape corresponding to the distribution information;
determining a target playing position of the audio to be played according to the geometric shape;
and synthesizing the audio corresponding to the at least one sound effect point according to the geometric shape to obtain the audio to be played.
4. The method of claim 3, wherein determining the target playing position of the audio to be played according to the geometric shape comprises:
determining the type of the sound effect point corresponding to the at least one sound effect point;
when the number of the sound effect point types is multiple, determining the geometric shape formed by the sound effect points of each type;
and determining a target playing position corresponding to each type of sound effect point according to the geometric shape formed by each type of sound effect point.
5. The method of claim 1, wherein playing the audio to be played at the target playing position comprises:
acquiring an audio playing range corresponding to the audio to be played;
detecting whether the target position of the virtual radio is within the audio playing range or not to obtain a detection result;
and when the detection result indicates that the target position is within the audio playing range, playing the audio to be played at the target playing position.
6. The method of claim 5, further comprising:
and when the detection result indicates that the target position is out of the audio playing range, stopping playing the audio to be played at the target playing position.
7. The method according to claim 5, wherein when the detection result indicates that the target position is within the audio playing range, playing the audio to be played at the target playing position comprises:
detecting whether the target playing position is located in a preset area or not;
and if the target playing position is located in the preset area and the virtual radio is located in the preset area, playing the audio to be played at the target playing position.
8. The method of claim 7, further comprising:
and if the target playing position is located in the preset area and the virtual radio is located outside the preset area, stopping playing the audio to be played at the target playing position.
9. The method of claim 1, wherein prior to determining the distribution information of the at least one sound effect point, the method further comprises:
acquiring the number of current sound effect points corresponding to at least one virtual component in the game scene and the distribution density of the at least one sound effect point on the at least one virtual component;
and setting the at least one sound effect point on the at least one virtual component according to the number of the current sound effect points and the distribution density, and establishing an association relation between the at least one sound effect point and the at least one virtual component.
10. The method of claim 9, wherein setting the at least one sound effect point on the at least one virtual component according to the current number of sound effect points and the distribution density comprises:
determining the maximum sound effect point number and the maximum distribution density corresponding to the current virtual component, wherein the current virtual component is any one of the at least one virtual component;
when the number of the current sound effect points is larger than or equal to the maximum number of the sound effect points and/or the distribution density is larger than or equal to the maximum distribution density, setting the at least one sound effect point on the current virtual component according to the maximum number of the sound effect points and the maximum distribution density;
and when the number of the current sound effect points is less than the maximum number of sound effect points and the distribution density is less than the maximum distribution density, setting the at least one sound effect point on the current virtual component according to the number of the current sound effect points and the distribution density.
11. The method of claim 9, further comprising:
when detecting that the geometric information of the at least one virtual component changes, acquiring the changed target geometric information;
and adjusting the current sound effect point number of the at least one virtual component and/or the distribution density of the at least one sound effect point based on the target geometric information.
12. An apparatus for playing audio, comprising:
the first acquisition module is used for acquiring the position of the virtual character in a game scene;
the second acquisition module is used for acquiring the audio receiving range of a virtual radio in the game scene according to the position of the virtual character, wherein the virtual radio is bound at a specific position which is relatively fixed with the virtual character;
the first determining module is used for determining the distribution information of at least one sound effect point in the audio receiving range, wherein the at least one sound effect point is preset in a game scene and is used for identifying at least one sound source position;
a second determining module, configured to determine, based on the distribution information, an audio to be played corresponding to the audio receiving range and a target playing position corresponding to the audio to be played;
and the playing module is used for playing the audio to be played at the target playing position.
13. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is arranged to execute the method of playing audio according to any one of claims 1 to 11 when running.
14. An electronic device, wherein the electronic device comprises one or more processors; and storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to run the one or more programs, wherein the programs are arranged to perform the method of playing audio of any one of claims 1 to 11.
CN202111632559.2A 2021-12-28 2021-12-28 Audio playing method and device and electronic equipment Pending CN114404973A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111632559.2A CN114404973A (en) 2021-12-28 2021-12-28 Audio playing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111632559.2A CN114404973A (en) 2021-12-28 2021-12-28 Audio playing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN114404973A true CN114404973A (en) 2022-04-29

Family

ID=81270263

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111632559.2A Pending CN114404973A (en) 2021-12-28 2021-12-28 Audio playing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114404973A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115050228A (en) * 2022-06-15 2022-09-13 北京新唐思创教育科技有限公司 Material collecting method and device and electronic equipment
CN115050228B (en) * 2022-06-15 2023-09-22 北京新唐思创教育科技有限公司 Material collection method and device and electronic equipment

Similar Documents

Publication Publication Date Title
US20200296532A1 (en) Sound reproduction method and apparatus, storage medium, and electronic apparatus
KR102609668B1 (en) Virtual, Augmented, and Mixed Reality
US11950084B2 (en) 3D audio rendering using volumetric audio rendering and scripted audio level-of-detail
WO2015127890A1 (en) Method and apparatus for sound processing in three-dimensional virtual scene
US8626321B2 (en) Processing audio input signals
US7019742B2 (en) Dynamic 2D imposters of 3D graphic objects
KR20200047414A (en) Systems and methods for modifying room characteristics for spatial audio rendering over headphones
CN109327795A (en) Sound effect treatment method and Related product
CN108379842A (en) Gaming audio processing method, device, electronic equipment and storage medium
WO2020149893A1 (en) Audio spatialization
EP4101182A1 (en) Augmented reality virtual audio source enhancement
CN114404973A (en) Audio playing method and device and electronic equipment
US8644520B2 (en) Morphing of aural impulse response signatures to obtain intermediate aural impulse response signals
US20210322880A1 (en) Audio spatialization
CN114821010A (en) Virtual scene processing method and device, storage medium and electronic equipment
CN113941151A (en) Audio playing method and device, electronic equipment and storage medium
CN109683845B (en) Sound playing device, method and non-transient storage medium
CN112717395B (en) Audio binding method, device, equipment and storage medium
Röber et al. Authoring of 3D virtual auditory Environments
CN109840882A (en) Erect-position matching process and device based on point cloud data
CN113318432B (en) Music control method in game, nonvolatile storage medium and electronic device
Catalano Virtual Reality In Interactive Environments: A Comparative Analysis Of Spatial Audio Engines
Ferreira Creating Immersive Audio in a Historical Soundscape Context
CN118286685A (en) Audio playing method and device, electronic equipment and readable storage medium
CN117065345A (en) Audio processing method and device in game and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination