CN114288656A - Virtual sound source object setting method and device, electronic equipment and medium - Google Patents

Virtual sound source object setting method and device, electronic equipment and medium Download PDF

Info

Publication number
CN114288656A
CN114288656A CN202111669705.9A
Authority
CN
China
Prior art keywords
sound source
virtual sound
source object
virtual
volume
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111669705.9A
Other languages
Chinese (zh)
Inventor
朱晟达
周思桢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Perfect Time And Space Software Co ltd
Original Assignee
Shanghai Perfect Time And Space Software Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Perfect Time And Space Software Co ltd filed Critical Shanghai Perfect Time And Space Software Co ltd
Priority to CN202111669705.9A priority Critical patent/CN114288656A/en
Publication of CN114288656A publication Critical patent/CN114288656A/en
Pending legal-status Critical Current

Abstract

The invention discloses a method, a device, electronic equipment and a medium for setting a virtual sound source object, wherein the method comprises the following steps: acquiring relative position information between a virtual sound source object and a listener object in a current game scene, wherein the virtual sound source object is controlled by a virtual character, the virtual sound source object is provided with a virtual sound source and a collision bounding volume, and the collision bounding volume is used for representing the volume form of the virtual sound source; and setting the virtual sound source object to a target position according to the relative position information, so as to realize a three-dimensional sound effect of the virtual sound source. The invention can solve the technical problem that the prior art cannot support setting a virtual sound source object having a volume.

Description

Virtual sound source object setting method and device, electronic equipment and medium
Technical Field
The invention relates to the technical field of internet, in particular to a method and a device for setting a virtual sound source object, electronic equipment and a medium.
Background
In a game, sound is one of the indispensable elements. A rich sound configuration can bring the virtual world and the real world closer together. Configuring different sounds for different game scenes increases the realism of the scenes and how closely they fit the real world.
At present, sound in a game scene is generally generated and set by placing a number of point sound sources at the corresponding positions in the scene. In other words, the conventional technique supports only the setting of point virtual sound sources, and cannot support the setting of a virtual sound source having a volume (i.e., a virtual sound source object). Therefore, it is desirable to provide a scheme that supports setting a virtual sound source object having a volume.
Disclosure of Invention
The embodiment of the invention provides a method, a device, an electronic device and a medium for setting a virtual sound source object, and solves the technical problem that the setting of a virtual sound source object with a volume cannot be supported in the prior art.
In one aspect, the present invention provides a method for setting a virtual sound source object according to an embodiment of the present invention, where the method includes:
acquiring relative position information between a virtual sound source object and a listener object in a current game scene, wherein the virtual sound source object is controlled by a virtual character, the virtual sound source object is provided with a virtual sound source and a collision bounding volume, and the collision bounding volume is used for representing the volume form of the virtual sound source;
and setting the virtual sound source object to a target position according to the relative position information so as to realize the three-dimensional sound effect of the virtual sound source.
Optionally, the setting the virtual sound source object to the target position according to the relative position information includes:
judging whether the position of the listener object is within the volume range of the virtual sound source object or not according to the relative position information;
if yes, setting the position of the virtual sound source object to the position of the listener object;
if not, setting the position of the virtual sound source object to a target position, wherein the target position is any target position in the volume range of the virtual sound source object.
Optionally, the target position is a position of a point on the surface of the virtual sound source object closest to the position of the listener object, so as to implement a function that the virtual sound source starts to attenuate from outside the volume range of the virtual sound source object.
Optionally, the method further comprises:
obtaining skeletal animation data of the virtual character, wherein the skeletal animation data comprises a skeletal hierarchy of the virtual character;
and associating the virtual sound source object with the skeleton hierarchical structure of the virtual character, and realizing the position change and the volume change of the virtual sound source object relative to the target skeleton of the virtual character through the skeleton animation data of the virtual character.
Optionally, the implementing, by the bone animation data of the virtual character, the position change and the volume change of the virtual sound source object relative to the target bone of the virtual character includes:
determining the motion trail of the virtual sound source object relative to the target skeleton according to the skeleton animation data of the virtual character;
and controlling the virtual sound source object to move along the motion trail, and synchronously changing the volume of the virtual sound source object.
Optionally, the controlling the virtual sound source object to move along the motion trajectory and synchronously changing the volume of the virtual sound source object includes:
moving the position of the virtual sound source object along the motion track, and zooming the volume of the virtual sound source object according to a preset zooming size during the moving;
and the position movement and the volume scaling of the virtual sound source object are transformed to the sound effect presentation of the virtual sound source object through an audio middleware, so that the sound effect of the virtual sound source object changes along with the position change and the volume change of the virtual sound source object.
Optionally, the method further comprises:
setting a face orientation of the listener object to be the same as an orientation of a virtual camera corresponding to the listener object.
In another aspect, the present invention provides a virtual sound source object setting device according to an embodiment of the present invention, where the device includes an obtaining module and a setting module, where:
the acquisition module is used for acquiring relative position information between a virtual sound source object and a listener object in a current game scene, wherein the virtual sound source object is controlled by a virtual character, the virtual sound source object is provided with a virtual sound source and a collision bounding volume, and the collision bounding volume is used for representing the volume form of the virtual sound source;
and the setting module is used for setting the virtual sound source object to a target position according to the relative position information so as to realize the three-dimensional sound effect of the virtual sound source.
For the content that is not introduced or not described in the embodiments of the present invention, reference may be made to the related descriptions in the foregoing method embodiments, and details are not repeated here.
In another aspect, the present invention provides an electronic device according to an embodiment of the present invention, the electronic device including: a processor, a memory, a communication interface, and a bus; the processor, the memory and the communication interface are connected through the bus and complete mutual communication; the memory stores executable program code; the processor executes a program corresponding to the executable program code by reading the executable program code stored in the memory, for executing the virtual sound source object setting method as described above.
In another aspect, the present invention provides a computer-readable storage medium storing a program that, when executed on an electronic device, executes the virtual sound source object setting method as described above.
One or more technical solutions provided in the embodiments of the present invention have at least the following technical effects or advantages: the invention obtains the relative position information between a virtual sound source object and a listener object in the current game scene, wherein the virtual sound source object is controlled by a virtual character, the virtual sound source object is provided with a virtual sound source and a collision bounding volume, and the collision bounding volume is used for representing the volume form of the virtual sound source. In the above solution, the position of the virtual sound source object having the corresponding volume is set according to the relative position information between the virtual sound source object and the listener object, so as to realize the three-dimensional sound effect of the virtual sound source object.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on the drawings without creative efforts.
Fig. 1 is a schematic flowchart of a method for setting a virtual sound source object according to an embodiment of the present invention.
Fig. 2 is a schematic diagram illustrating an effect of a motion of a virtual sound source object according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a virtual sound source object setting device according to an embodiment of the present invention.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to better understand the technical solution, the technical solution will be described in detail with reference to the drawings and the specific embodiments.
First, it is stated that the term "and/or" appearing herein merely describes an association relationship between associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
Fig. 1 is a schematic flow chart of a method for setting a virtual sound source object according to an embodiment of the present invention. The method shown in fig. 1 may be applied to an electronic device, such as a smart phone, a tablet computer, etc., and the present invention is not limited thereto. The method comprises the following implementation steps:
s101, obtaining relative position information between a virtual sound source object and a listener object in a current game scene, wherein the virtual sound source object is controlled by a virtual character, the virtual sound source object is provided with a virtual sound source and a collision surrounding body, and the collision surrounding body is used for representing the volume form of the virtual sound source.
The virtual sound source object of the present invention is provided with a virtual sound source and a collision bounding volume, and the collision bounding volume is usually simulated/represented by a three-dimensional collision volume (referred to as a 3D collision volume for short) in a game engine (for example, the Unity engine). The shape and size of the 3D collision volume (i.e., the collision bounding volume of the virtual sound source object) are not limited by the present invention; for example, the shape of the 3D collision volume may include, but is not limited to, a cuboid, a cube, a capsule, a sphere, a polyhedron, or another custom mesh shape.
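By way of illustration only (the patent does not specify a concrete data structure), a collision bounding volume with a selectable shape could be modeled roughly as below; the class and field names are assumptions made for this Python sketch, not the engine's API.

from dataclasses import dataclass
from typing import Tuple, Union

Vec3 = Tuple[float, float, float]  # (x, y, z) world-space coordinates

@dataclass
class BoxVolume:           # cuboid / cube, axis-aligned for simplicity
    center: Vec3
    half_extents: Vec3     # half the box size along each axis

@dataclass
class SphereVolume:
    center: Vec3
    radius: float

@dataclass
class CapsuleVolume:
    point_a: Vec3          # centers of the two end hemispheres
    point_b: Vec3
    radius: float

# The collision bounding volume attached to a virtual sound source object
# may be any of the supported shapes.
CollisionBoundingVolume = Union[BoxVolume, SphereVolume, CapsuleVolume]

@dataclass
class VirtualSoundSourceObject:
    sound_event: str                          # identifier of the virtual sound source
    bounding_volume: CollisionBoundingVolume  # represents the volume form of the source
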
The virtual sound source is the source of the sound generated/emitted by the virtual sound source object when that object is manipulated. The virtual sound source is controlled by a virtual character, which can be understood as any game object that needs to produce sound in the game, such as a game character, an opponent character, a BOSS, a monster, an NPC, or even a scene object. The player character in this scheme refers to the game character currently operated by the player. The listener object is a component mounted on the player character model for receiving audio; it can be understood as a sound-pickup device such as a microphone, and provides an audio listening function for the game character controlled by the player. The listener object does not belong to the player character model itself but is associated with the position of the player character: a change in the position of the player character drives a change in the position of the player character model, which in turn drives a change in the position of the listener object. For example, when the player manipulates the current game character, the listener object may be set at the head position of that character; as another example, the listener object may be mounted on a virtual camera while a cut scene is playing. Optionally, the present invention may set the face orientation of the listener object to be the same as the orientation of the virtual camera pre-configured to correspond to the listener object in the current game scene, so as to ensure that the direction from which the listener object hears the sound source (i.e., the sound source direction) is consistent with the viewing direction of the game picture.
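As a minimal sketch of the optional orientation rule just described (the Listener and Camera types and their fields are assumptions, not the engine's API):

from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Camera:
    position: Vec3
    forward: Vec3        # unit vector of the camera's viewing direction

@dataclass
class Listener:
    position: Vec3
    forward: Vec3        # unit vector of the listener's facing direction

def sync_listener_orientation(listener: Listener, camera: Camera) -> None:
    """Keep the listener facing the same direction as the virtual camera,
    so the perceived sound-source direction matches the on-screen view."""
    listener.forward = camera.forward

# Usage: called once per frame after the camera has been updated.
cam = Camera(position=(0.0, 1.7, -5.0), forward=(0.0, 0.0, 1.0))
lst = Listener(position=(0.0, 1.7, -5.0), forward=(1.0, 0.0, 0.0))
sync_listener_orientation(lst, cam)
assert lst.forward == cam.forward
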
Specifically, consider an in-game example in which the player, together with several teammate characters, attacks a huge BOSS in a certain scene. Suppose the BOSS attacks with its left hand and its right hand respectively while letting out a hoarse roar.
At this time, the BOSS simultaneously manipulates three virtual sound source objects:
A. Follows the movement of the mouth position and is responsible for playing the hoarse roar;
B. Moves with the left hand and is responsible for playing the left-hand skill sound effect;
C. Moves with the right hand and is responsible for playing the right-hand skill sound effect.
Suppose the player character is now standing in front of and facing the BOSS. Because the listener object is at the player character's head, the player should hear the following sound presentation:
The BOSS's roar sounds as if it comes from above and in front (much higher than the player, because the BOSS is large);
The left-hand skill sound effect sounds as if it comes from the right side and moves with the BOSS's left hand;
The right-hand skill sound effect sounds as if it comes from the left side and moves with the BOSS's right hand.
In an embodiment, the present invention may use a collision detection technique to detect whether the position of the listener object is within the volume range of the virtual sound source object in the current game scene, so as to obtain the relative position information between the virtual sound source object and the listener object in the current game scene. The relative position information is specifically used to indicate whether the position of the listener object is within the volume range of the virtual sound source object.
In a specific implementation, the present invention may use a preset collision detection algorithm to detect the position information between the collision bounding volume of the virtual sound source object and the listener object in the current game scene, so as to obtain the relative position information between the listener object and the virtual sound source object. The collision detection algorithm includes, but is not limited to, an intersection test collision detection algorithm, a distance query collision detection algorithm, a gap overlap collision detection algorithm, and the like.
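For two of the shapes mentioned earlier (sphere and axis-aligned box), the containment check behind this relative position information can be sketched as follows; in a real engine this would normally be delegated to the physics/collision system, and the function names here are illustrative only.

from typing import Tuple

Vec3 = Tuple[float, float, float]

def point_in_sphere(p: Vec3, center: Vec3, radius: float) -> bool:
    """True if the listener position p lies inside a spherical bounding volume."""
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return dx * dx + dy * dy + dz * dz <= radius * radius

def point_in_box(p: Vec3, center: Vec3, half_extents: Vec3) -> bool:
    """True if the listener position p lies inside an axis-aligned box volume."""
    return all(abs(p[i] - center[i]) <= half_extents[i] for i in range(3))

# The relative position information of step S101 can then simply record
# whether the listener is inside the sound source object's volume range.
listener_pos = (1.0, 0.5, 2.0)
inside = point_in_box(listener_pos, center=(0.0, 0.0, 0.0), half_extents=(2.0, 2.0, 3.0))
print("listener inside volume:", inside)   # True for this example
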
S102, setting the virtual sound source object to a target position according to the relative position information so as to realize the three-dimensional sound effect of the virtual sound source.
In an embodiment, the present invention determines whether the position of the listener object is within the volume range of the virtual sound source object according to the relative position information. If yes, setting the position of the virtual sound source object to the position of the listener object; if not, setting the position of the virtual sound source object to a target position, wherein the target position is any target position in the volume range of the virtual sound source object.
In a specific implementation, the position of the virtual sound source object can be determined/set according to the relative position information. Specifically, when it is determined that the position of the listener object is within the volume range of the virtual sound source object based on the relative position information, the present invention can set the virtual sound source object at the position of the listener object, that is, set the position of the virtual sound source object to the position of the listener object, which can achieve the effect of no attenuation of the sound volume of the virtual sound source within the volume range.
Accordingly, when it is determined from the relative position information that the position of the listener object is not within the volume range of the virtual sound source object, the present invention can set the position of the virtual sound source object to a target position; the target position is any position within the volume range of the virtual sound source object, for example, any position on the surface of the virtual sound source object. Preferably, the target position is the position of the point on the surface of the virtual sound source object that is closest to the position of the listener object, so that the sound volume of the virtual sound source starts to gradually attenuate outside the volume range. Here, the surface of the virtual sound source object is the surface of its collision bounding volume.
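The positioning rule of step S102 can be sketched for an axis-aligned box volume as follows (a sketch only; the clamp-to-box closest-point computation is a standard technique and the names are assumptions): if the listener is inside the volume the source plays at the listener with no attenuation, otherwise it plays at the nearest surface point so that attenuation starts from the volume boundary.

from typing import Tuple

Vec3 = Tuple[float, float, float]

def closest_point_on_box(p: Vec3, center: Vec3, half_extents: Vec3) -> Vec3:
    """Clamp p to the box; for a point outside the box this is the closest
    point on the box surface."""
    return tuple(
        min(max(p[i], center[i] - half_extents[i]), center[i] + half_extents[i])
        for i in range(3)
    )

def choose_emitter_position(listener: Vec3, center: Vec3, half_extents: Vec3) -> Vec3:
    inside = all(abs(listener[i] - center[i]) <= half_extents[i] for i in range(3))
    if inside:
        # Listener is inside the volume range: play at the listener, no attenuation.
        return listener
    # Listener is outside: attenuate starting from the nearest point on the surface.
    return closest_point_on_box(listener, center, half_extents)

print(choose_emitter_position((5.0, 0.0, 0.0), (0.0, 0.0, 0.0), (2.0, 2.0, 2.0)))  # (2.0, 0.0, 0.0)
print(choose_emitter_position((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), (2.0, 2.0, 2.0)))  # (1.0, 0.0, 0.0)
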
Some alternative embodiments to which the invention relates are described below.
In some optional embodiments, when the virtual character operates the virtual sound source object, the present invention may use the virtual character (or a predetermined body part of the virtual character, such as an arm, a hand bone, or a leg) as a parent, and control the virtual sound source (i.e., the virtual sound source object corresponding to the virtual sound source) to move along a preset motion trajectory relative to the parent, so that the virtual sound source object moves along with the motion of the virtual character. The preset motion trajectory can be custom-defined by the system, for example a parabolic motion trajectory or a linear motion trajectory.
Further optionally, the present invention may collect, in real time or periodically, motion animations of the virtual sound source object moving along the preset motion trajectory. The motion animations may be skeletal animations or keyframe animations, which is not limited by the present invention. Skeletal animation is a type of model animation in which the model has a skeleton structure consisting of interconnected "bones", and the animation is generated for the model by changing the orientation and position of the bones. Keyframe animation is a frame-based animation that represents the key states of the sound source object.
It will be appreciated that the present invention may utilize the various animation tools provided by the game engine (e.g., Animation, Timeline, etc. in the Unity engine) to define an animation by adding keyframes for properties such as position, rotation, and scale, and then use that animation to determine the motion of the virtual sound source object.
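As a rough illustration of sampling such keyframes at runtime (the flat (time, position, scale) layout is an assumption for this sketch, not the engine's animation format):

from bisect import bisect_right
from typing import List, Tuple

Vec3 = Tuple[float, float, float]
Keyframe = Tuple[float, Vec3, float]   # (time, position, uniform scale)

def sample(keys: List[Keyframe], t: float) -> Tuple[Vec3, float]:
    """Linearly interpolate position and scale between the two keyframes around time t."""
    times = [k[0] for k in keys]
    i = bisect_right(times, t)
    if i == 0:
        return keys[0][1], keys[0][2]
    if i == len(keys):
        return keys[-1][1], keys[-1][2]
    (t0, p0, s0), (t1, p1, s1) = keys[i - 1], keys[i]
    a = (t - t0) / (t1 - t0)
    pos = tuple(p0[j] + a * (p1[j] - p0[j]) for j in range(3))
    return pos, s0 + a * (s1 - s0)

# A two-second animation: the sound source object moves forward while shrinking.
anim = [(0.0, (0.0, 1.0, 0.0), 1.0), (2.0, (0.0, 1.0, 6.0), 0.2)]
print(sample(anim, 1.0))   # halfway: ((0.0, 1.0, 3.0), 0.6)
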
In some alternative embodiments, the present invention may obtain skeletal animation data of the virtual character, the skeletal animation data including a skeletal hierarchy of the virtual character, for example, a skeletal hierarchy of body parts of the virtual character, such as an arm-palm-finger skeletal hierarchy, and the like. Furthermore, the virtual sound source object and the skeleton hierarchical structure of the virtual character can be associated, and the position change and the volume change of the virtual sound source object relative to the target skeleton of the virtual character are realized through the skeleton animation data of the virtual character, namely the position change and the volume change of the virtual sound source object along with the motion of the virtual character are realized.
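A sketch, under assumed names, of what associating the sound source object with a target bone amounts to: the object stores an offset from the bone, and its world position is recomputed from the animated skeletal hierarchy each frame. For brevity only positions are accumulated here; a full implementation would also accumulate bone rotations and scales.

from dataclasses import dataclass
from typing import Dict, Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Bone:
    name: str
    parent: Optional[str]
    local_position: Vec3          # position relative to the parent bone

def world_position(bones: Dict[str, Bone], name: str) -> Vec3:
    """Walk up the skeletal hierarchy and accumulate local positions."""
    pos = (0.0, 0.0, 0.0)
    cur: Optional[str] = name
    while cur is not None:
        bone = bones[cur]
        pos = tuple(pos[i] + bone.local_position[i] for i in range(3))
        cur = bone.parent
    return pos

# Skeleton excerpt: root -> arm -> hand; the sound source object follows the hand bone.
skeleton = {
    "root": Bone("root", None, (0.0, 0.0, 0.0)),
    "arm":  Bone("arm", "root", (0.0, 1.5, 0.0)),
    "hand": Bone("hand", "arm", (0.5, 0.0, 0.0)),
}
sound_source_offset = (0.0, 0.25, 0.0)     # offset of the sound object from the hand bone
hand_pos = world_position(skeleton, "hand")
emitter_pos = tuple(hand_pos[i] + sound_source_offset[i] for i in range(3))
print(emitter_pos)                          # (0.5, 1.75, 0.0)
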
The specific implementation manner of the invention for realizing the position change and the volume change of the virtual sound source object relative to the target skeleton of the virtual character through the skeleton animation data of the virtual character can be as follows: and determining the motion trail of the virtual sound source object relative to the target skeleton according to the skeleton animation data of the virtual character. And then controlling the virtual sound source object to move along the motion track, and synchronously changing the volume of the virtual sound source object so as to realize the effect that the virtual sound source object moves away from or towards the virtual character.
In a specific implementation, the present invention may collect a series of skeleton animation data generated when the virtual character manipulates the virtual sound source object, where the number of the skeleton animation data is not limited in the present invention, and may be one or more. Further, the present invention can determine the motion trajectory of the virtual sound source object relative to the target skeleton of the virtual character according to the collected skeleton animation data of the virtual character, and two possible embodiments thereof are described below.
In an example embodiment, when the virtual character is an opponent character corresponding to a player character, the present invention may parse the skeletal animation data, and determine the position of the virtual character (or specifically, the position of the target skeleton of the virtual character) in the skeletal animation data. Then, the position of the player character may be acquired, and then the motion trajectory may be generated according to the determined position of the virtual character and the position of the player character, and specifically, the motion trajectory may be generated according to the position of the opponent character, the position of the player character, and relative position information (such as a relative position direction and a relative position distance) between the player character and the opponent character. In this example, the start position of the motion trajectory is the position of the opponent character, and the end position of the motion trajectory is the position of the player character, so that the effect of moving the virtual sound source object from the opponent character to the player character can be achieved. The motion trail involved in the present invention includes, but is not limited to, a straight motion trail, a parabolic motion trail or other curvilinear motion trail.
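A sketch of generating one such trajectory, here a parabolic arc sampled between the opponent character's position and the player character's position (the arc height and the sample count are assumed parameters, not values from the patent):

from typing import List, Tuple

Vec3 = Tuple[float, float, float]

def parabolic_trajectory(start: Vec3, end: Vec3, arc_height: float,
                         samples: int = 16) -> List[Vec3]:
    """Sample points from start to end, lifting the midpoint by arc_height.
    With arc_height = 0 this degenerates into a straight-line trajectory."""
    points = []
    for k in range(samples + 1):
        t = k / samples
        x = start[0] + t * (end[0] - start[0])
        y = start[1] + t * (end[1] - start[1]) + arc_height * 4.0 * t * (1.0 - t)
        z = start[2] + t * (end[2] - start[2])
        points.append((x, y, z))
    return points

# Trajectory of a sound-emitting projectile from the opponent character to the player character.
opponent_pos = (0.0, 2.0, 0.0)
player_pos = (0.0, 1.0, 10.0)
trajectory = parabolic_trajectory(opponent_pos, player_pos, arc_height=3.0)
print(trajectory[0], trajectory[len(trajectory) // 2], trajectory[-1])
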
In another exemplary embodiment, the present invention may analyze each frame of the collected skeletal animation data to obtain information such as the position of the virtual character and the manipulation direction and manipulation strength applied to the virtual sound source object. Then, the motion trajectory can be generated from the position of the virtual character, the manipulation direction and the manipulation strength; specifically, the invention can feed the analyzed information, such as the position of the opponent character and the manipulation direction and strength applied to the sound source object, into a preset trajectory prediction model to generate the trajectory, where the trajectory prediction model is a pre-trained model and may include, but is not limited to, a neural network model or another deep learning model. In this example, the starting position of the motion trajectory may be the position of the virtual character, and the ending position of the motion trajectory is not limited; for example, it may be the position of an opponent character corresponding to the virtual character.
After the motion trajectory is obtained, the invention can control the virtual sound source object to move along the motion trajectory and synchronously change the volume of the virtual sound source object. Specifically, as the virtual sound source object moves along the motion trajectory, its position is moved and its volume is synchronously scaled according to a preset scaling size, so that the virtual sound source object changes position and volume synchronously with the motion of the target skeleton. The preset scaling size is determined by a preset scaling animation corresponding to the virtual sound source model. For example, if the virtual sound source model is a fireball, then after the virtual character releases a fire-area attack skill, the fire gradually dies out over time; as the volume of the fire keeps shrinking, its sound effect is attenuated along with the shrinking volume.
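A sketch of the per-frame update implied here, assuming the trajectory has already been sampled and the fireball's scale simply shrinks from a start value to an end value over its lifetime (all names and numbers are illustrative):

from typing import List, Tuple

Vec3 = Tuple[float, float, float]

def update_sound_source(trajectory: List[Vec3], start_scale: float, end_scale: float,
                        elapsed: float, duration: float) -> Tuple[Vec3, float]:
    """Return the sound source object's position and uniform scale at time `elapsed`."""
    t = min(max(elapsed / duration, 0.0), 1.0)
    # Position: pick the trajectory sample corresponding to the normalized time.
    index = round(t * (len(trajectory) - 1))
    position = trajectory[index]
    # Volume: interpolate the preset scaling from start_scale to end_scale.
    scale = start_scale + t * (end_scale - start_scale)
    return position, scale

# Fireball: travels along a precomputed trajectory and shrinks to nothing in 2 seconds.
traj = [(0.0, 1.0, float(z)) for z in range(11)]
for frame_time in (0.0, 1.0, 2.0):
    pos, scale = update_sound_source(traj, start_scale=1.0, end_scale=0.0,
                                     elapsed=frame_time, duration=2.0)
    print(frame_time, pos, scale)
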
In an alternative embodiment, the invention can also map the position movement and the volume scaling of the virtual sound source object to the sound effect presentation of the virtual sound source object through audio middleware, so that the sound effect of the virtual sound source object changes along with its position change and volume change. For example, when the position and/or volume of the virtual sound source object changes, the loudness of the corresponding virtual sound source can be synchronously scaled up or down; for instance, when the virtual sound source object moves away from the listener object along the motion trajectory, its loudness can be controlled to decrease gradually, producing a volume-attenuation effect. As another example, if the volume of the virtual sound source object first shrinks and then grows while it moves along the motion trajectory, the loudness is correspondingly controlled to first decrease and then increase, so that the sound of the virtual sound source object follows the change of its volume.
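Purely for illustration of this mapping (in practice the attenuation curves live in the audio middleware and are not specified by the patent), a loudness gain could be derived from the listener distance and the object's current scale as follows; max_distance and ref_scale are assumed tuning parameters.

def playback_gain(distance: float, scale: float,
                  max_distance: float = 30.0, ref_scale: float = 1.0) -> float:
    """Gain in [0, 1]: linear distance attenuation, further scaled by the
    object's current volume relative to a reference scale."""
    distance_factor = max(0.0, 1.0 - distance / max_distance)
    size_factor = min(1.0, max(0.0, scale / ref_scale))
    return distance_factor * size_factor

print(playback_gain(distance=0.0, scale=1.0))    # 1.0  -> at the source, full size
print(playback_gain(distance=15.0, scale=1.0))   # 0.5  -> half the maximum distance
print(playback_gain(distance=15.0, scale=0.5))   # 0.25 -> same distance, half the size
print(playback_gain(distance=40.0, scale=1.0))   # 0.0  -> beyond the audible range
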
In an optional embodiment, the sound effect of the virtual sound source object may also change according to the positional relationship between the listener object and the virtual character together with the game logic. For example, the game may contain a stealth (sneak-attack) scene: in that scene, when the virtual character approaches the player character, the distance between the virtual sound source object and the player can be determined intelligently, that is, the position of the virtual sound source object relative to the player character or the listener object changes; the closer it gets, the lower the volume of the sound controlled by the virtual character becomes, so as to create a realistic sound effect of an enemy lowering the sound emitted by the virtual sound source object while sneaking up.
For example, please refer to fig. 2, which is a schematic diagram of the effect of a possible virtual sound source object motion. As shown in fig. 2, during the motion of the virtual sound source object, the present invention can control the volume of the virtual sound source object to change along the motion trajectory and, at the same time, control the loudness of the virtual sound source to change with the change of the object's volume. For example, in the figure, while the virtual sound source object moves from the opponent character to the player character along the motion trajectory, the volume of the virtual sound source object may be controlled to change according to a preset enlargement size. Optionally, the volume of the virtual sound source object may be controlled to increase gradually by a preset amount, so that the volume of the virtual sound source object grows gradually along the motion trajectory and its loudness grows with it.
In other words, in this optional embodiment, during the motion of the virtual sound source object, the present invention may also apply a preset sound source effect to the virtual sound source object synchronously. The preset sound source effect is custom-defined by the system, for example change effects on attributes such as the sound source object's volume, rotation and position, and the loudness of the sound source. Several possible embodiments are presented below by way of example.
In an exemplary embodiment, in the process of moving the virtual sound source object, the present invention may perform scaling processing on the virtual sound source object according to a preset scaling size, for example, when the virtual sound source object moves along the motion trajectory, the virtual sound source object is also displayed in an enlarged manner according to a preset enlarging size.
In another exemplary embodiment, during the motion of the virtual sound source object, the loudness of the virtual sound source corresponding to the virtual sound source object is controlled to change as the object moves along the motion trajectory; for example, the loudness is increased from small to large, that is, the closer the virtual sound source object gets to the listener object, the louder the virtual sound source becomes.
In another exemplary embodiment, during the motion process of the virtual sound source object, the present invention may perform a rotation process of a preset angle on the virtual sound source object, for example, perform a rotation of a fixed angle/a fixed rotation angular velocity along a preset direction, and the present invention is not limited thereto.
It should be noted that, in a specific implementation, the three exemplary embodiments above may be implemented separately, or any two or more of them may be implemented in combination, and the present invention is not limited in this respect. For example, in an embodiment, during the motion of the virtual sound source object, the volume of the virtual sound source object and the loudness of the corresponding virtual sound source may be controlled simultaneously, so that the sound attenuation of the virtual sound source object changes along with the change of the object's volume.
It should be noted that, in a specific implementation in the game engine, the volume of the virtual sound source object can be modified through keyframe animation, thereby realizing the scaling of the virtual sound source object. Furthermore, the present invention may use audio middleware (e.g., Wwise) in the game engine to implement the embodiments described above with reference to fig. 1 and fig. 2. In other words, the embodiments of the present invention can be applied to audio middleware in a game engine, such as Wwise.
By implementing the embodiment of the invention, the relative position information between the virtual sound source object and the listener object in the current game scene is acquired, wherein the virtual sound source object is controlled by the virtual character, the virtual sound source object is provided with a virtual sound source and a collision bounding volume, and the collision bounding volume is used for representing the volume form of the virtual sound source. In the above solution, the position of the virtual sound source object is set according to the relative position information between the virtual sound source object and the listener object, so as to realize the three-dimensional sound effect of the virtual sound source object.
Based on the same inventive concept, another embodiment of the present invention provides a device and an electronic device corresponding to the method for setting a virtual sound source object according to the embodiments of the present invention.
Fig. 3 is a schematic structural diagram of a virtual sound source object setting device according to an embodiment of the present invention. The device 3 shown in fig. 3 comprises: an obtaining module 301 and a setting module 302, wherein:
the acquiring module 301 is configured to acquire relative position information between a virtual sound source object and a listener object in a current game scene, where the virtual sound source object is controlled by a virtual character, the virtual sound source object is provided with a virtual sound source and a collision bounding volume, and the collision bounding volume is used to represent a volume form of the virtual sound source;
the setting module 302 is configured to set the virtual sound source object to a target position according to the relative position information, so as to implement a three-dimensional sound effect of the virtual sound source.
Optionally, the setting module 302 is specifically configured to:
judging whether the position of the listener object is within the volume range of the virtual sound source object or not according to the relative position information;
if yes, setting the position of the virtual sound source object to the position of the listener object;
if not, setting the position of the virtual sound source object to a target position, wherein the target position is any target position in the volume range of the virtual sound source object.
Optionally, the target position is a position of a point on the surface of the virtual sound source object closest to the position of the listener object, so as to implement a function that the virtual sound source starts to attenuate from outside the volume range of the virtual sound source object.
Optionally, the apparatus further comprises a processing module 303, wherein:
the obtaining module 301 is further configured to obtain skeleton animation data of the virtual character, where the skeleton animation data includes a skeleton hierarchy of the virtual character;
the processing module 303 is configured to associate the virtual sound source object with the skeleton hierarchy of the virtual character, and implement a position change and a volume change of the virtual sound source object with respect to a target skeleton of the virtual character through the skeleton animation data of the virtual character.
Optionally, the processing module 303 is specifically configured to:
determining the motion trail of the virtual sound source object relative to the target skeleton according to the skeleton animation data of the virtual character;
and controlling the virtual sound source object to move along the motion trail, and synchronously changing the volume of the virtual sound source object.
Optionally, the processing module 303 is further specifically configured to:
moving the position of the virtual sound source object along the motion track, and zooming the volume of the virtual sound source object according to a preset zooming size during the moving;
and the position movement and the volume scaling of the virtual sound source object are transformed to the sound effect presentation of the virtual sound source object through an audio middleware, so that the sound effect of the virtual sound source object changes along with the position change and the volume change of the virtual sound source object.
Optionally, the processing module 303 is further configured to:
setting a face orientation of the listener object to be the same as an orientation of a virtual camera corresponding to the listener object.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. The electronic device 40 shown in fig. 4 includes: at least one processor 401, a communication interface 402, a user interface 403 and a memory 404, where the processor 401, the communication interface 402, the user interface 403 and the memory 404 may be connected by a bus or in another manner; the embodiment of the present invention takes connection by a bus 405 as an example.
The processor 401 may be a general-purpose processor, such as a Central Processing Unit (CPU).
The communication interface 402 may be a wired interface (e.g., an Ethernet interface) or a wireless interface (e.g., a cellular network interface or a wireless local area network interface) for communicating with other terminals or websites. In this embodiment of the present invention, the communication interface 402 is specifically configured to obtain a character position, relative position information, and the like.
The user interface 403 may be a touch panel, including a touch screen, for detecting operation instructions acting on the touch panel; the user interface 403 may also be a physical button or a mouse. The user interface 403 may also be a display screen for outputting and displaying images or data.
The Memory 404 may include Volatile Memory (Volatile Memory), such as Random Access Memory (RAM); the Memory may also include a Non-Volatile Memory (Non-Volatile Memory), such as a Read-Only Memory (ROM), a Flash Memory (Flash Memory), a Hard Disk (Hard Disk Drive, HDD), or a Solid-State Drive (SSD); the memory 404 may also comprise a combination of memories of the kind described above. The memory 404 is used for storing a set of program codes, and the processor 401 is used for calling the program codes stored in the memory 404 and executing the following operations:
acquiring relative position information between a virtual sound source object and a listener object in a current game scene, wherein the virtual sound source object is controlled by a virtual character, the virtual sound source object is provided with a virtual sound source and a collision bounding volume, and the collision bounding volume is used for representing the volume form of the virtual sound source;
and setting the virtual sound source object to a target position according to the relative position information so as to realize the three-dimensional sound effect of the virtual sound source.
Optionally, the setting the virtual sound source object to the target position according to the relative position information includes:
judging whether the position of the listener object is within the volume range of the virtual sound source object or not according to the relative position information;
if yes, setting the position of the virtual sound source object to the position of the listener object;
if not, setting the position of the virtual sound source object to a target position, wherein the target position is any target position in the volume range of the virtual sound source object.
Optionally, the target position is a position of a point on the surface of the virtual sound source object closest to the position of the listener object, so as to implement a function that the virtual sound source starts to attenuate from outside the volume range of the virtual sound source object.
Optionally, the processor 401 is further configured to:
obtaining skeletal animation data of the virtual character, wherein the skeletal animation data comprises a skeletal hierarchy of the virtual character;
and associating the virtual sound source object with the skeleton hierarchical structure of the virtual character, and realizing the position change and the volume change of the virtual sound source object relative to the target skeleton of the virtual character through the skeleton animation data of the virtual character.
Optionally, the implementing, by the bone animation data of the virtual character, the position change and the volume change of the virtual sound source object relative to the target bone of the virtual character includes:
determining the motion trail of the virtual sound source object relative to the target skeleton according to the skeleton animation data of the virtual character;
and controlling the virtual sound source object to move along the motion trail, and synchronously changing the volume of the virtual sound source object.
Optionally, the controlling the virtual sound source object to move along the motion trajectory and synchronously changing the volume of the virtual sound source object includes:
moving the position of the virtual sound source object along the motion track, and zooming the volume of the virtual sound source object according to a preset zooming size during the moving;
and the position movement and the volume scaling of the virtual sound source object are transformed to the sound effect presentation of the virtual sound source object through an audio middleware, so that the sound effect of the virtual sound source object changes along with the position change and the volume change of the virtual sound source object.
Optionally, the processor 401 is further configured to:
setting a face orientation of the listener object to be the same as an orientation of a virtual camera corresponding to the listener object.
One or more technical solutions provided in the embodiments of the present invention have at least the following technical effects or advantages: the invention obtains the relative position information between a virtual sound source object and a listener object in the current game scene, wherein the virtual sound source object is provided with a virtual sound source and a collision bounding volume, and the collision bounding volume is used for representing the volume form of the virtual sound source. In the above solution, the position of the virtual sound source object is set according to the relative position information between the virtual sound source object and the listener object, so as to realize the three-dimensional sound effect of the virtual sound source object.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A method for setting a virtual sound source object, the method comprising:
acquiring relative position information between a virtual sound source object and a listener object in a current game scene, wherein the virtual sound source object is controlled by a virtual character, the virtual sound source object is provided with a virtual sound source and a collision bounding volume, and the collision bounding volume is used for representing the volume form of the virtual sound source;
and setting the virtual sound source object to a target position according to the relative position information so as to realize the three-dimensional sound effect of the virtual sound source.
2. The method of claim 1, wherein the setting the virtual sound source object to the target position according to the relative position information comprises:
judging whether the position of the listener object is within the volume range of the virtual sound source object or not according to the relative position information;
if yes, setting the position of the virtual sound source object to the position of the listener object;
if not, setting the position of the virtual sound source object to a target position, wherein the target position is any target position in the volume range of the virtual sound source object.
3. The method of claim 2, wherein the target position is a position of a point on the surface of the virtual audio source object closest to the position of the listener object, so as to implement the function of attenuating the virtual audio source from outside the volume of the virtual audio source object.
4. The method of claim 1, further comprising:
obtaining skeletal animation data of the virtual character, wherein the skeletal animation data comprises a skeletal hierarchy of the virtual character;
and associating the virtual sound source object with the skeleton hierarchical structure of the virtual character, and realizing the position change and the volume change of the virtual sound source object relative to the target skeleton of the virtual character through the skeleton animation data of the virtual character.
5. The method of claim 4, wherein said enabling the change in position and the change in volume of the virtual audio source object relative to the target bone of the virtual character through the bone animation data of the virtual character comprises:
determining the motion trail of the virtual sound source object relative to the target skeleton according to the skeleton animation data of the virtual character;
and controlling the virtual sound source object to move along the motion trail, and synchronously changing the volume of the virtual sound source object.
6. The method according to claim 5, wherein said controlling the motion of the virtual sound source object along the motion trajectory and synchronously performing the change processing on the volume of the virtual sound source object comprises:
moving the position of the virtual sound source object along the motion track, and zooming the volume of the virtual sound source object according to a preset zooming size during the moving;
and the position movement and the volume scaling of the virtual sound source object are transformed to the sound effect presentation of the virtual sound source object through an audio middleware, so that the sound effect of the virtual sound source object changes along with the position change and the volume change of the virtual sound source object.
7. The method according to any one of claims 1-6, further comprising:
setting a face orientation of the listener object to be the same as an orientation of a virtual camera corresponding to the listener object.
8. A virtual sound source object setting device, characterized in that the device comprises an obtaining module and a setting module, wherein:
the acquisition module is used for acquiring relative position information between a virtual sound source object and a virtual character in a current game scene, wherein the virtual sound source object is provided with a virtual sound source and a collision bounding volume, and the collision bounding volume is used for representing the volume form of the virtual sound source;
and the setting module is used for setting the virtual sound source object to a target position according to the relative position information so as to realize the three-dimensional sound effect of the virtual sound source.
9. An electronic device, characterized in that the electronic device comprises: a processor, a memory, a communication interface, and a bus; the processor, the memory and the communication interface are connected through the bus and complete mutual communication; the memory stores executable program code; the processor executes a program corresponding to the executable program code by reading the executable program code stored in the memory, for executing the virtual sound source object setting method as recited in any one of claims 1 to 7 above.
10. A computer-readable storage medium characterized by storing a program which, when executed on an electronic device, executes the virtual sound source object setting method according to any one of claims 1 to 7.
CN202111669705.9A 2021-12-30 2021-12-30 Virtual sound source object setting method and device, electronic equipment and medium Pending CN114288656A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111669705.9A CN114288656A (en) 2021-12-30 2021-12-30 Virtual sound source object setting method and device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111669705.9A CN114288656A (en) 2021-12-30 2021-12-30 Virtual sound source object setting method and device, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN114288656A true CN114288656A (en) 2022-04-08

Family

ID=80974155

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111669705.9A Pending CN114288656A (en) 2021-12-30 2021-12-30 Virtual sound source object setting method and device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN114288656A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114917585A (en) * 2022-06-24 2022-08-19 四川省商投信息技术有限责任公司 Sound effect generation method and system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination