CN112717395A - Audio binding method, device, equipment and storage medium

Info

Publication number: CN112717395A
Authority: CN (China)
Prior art keywords: audio, target, playing, unit, virtual
Legal status: Granted; active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202110121424.3A
Other languages: Chinese (zh)
Other versions: CN112717395B (en)
Inventors: 徐凯, 马继超, 吴荣佳
Current and original assignee: Tencent Technology Shenzhen Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Events: application filed by Tencent Technology Shenzhen Co Ltd; priority to CN202110121424.3A; publication of CN112717395A; application granted; publication of CN112717395B

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/54: Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Stereophonic System (AREA)

Abstract

The application discloses an audio binding method, apparatus, device, and storage medium, belonging to the field of audio processing. According to the technical solutions provided by the embodiments of the application, a terminal can load, through a first application, audio configuration information generated by a second application, and bind a target audio unit to a target virtual object based on that information. Because the audio configuration information includes the playing parameters of the audio units and the storage locations of their audios, no complex and tedious resource loading and parameter setting work is needed: by directly loading the audio configuration information through the first application, the audio resources can be loaded and the playing parameters obtained, which improves the efficiency of configuring audio for virtual objects.

Description

Audio binding method, device, equipment and storage medium
Technical Field
The present application relates to the field of audio processing, and in particular, to an audio binding method, apparatus, device, and storage medium.
Background
With the development of multimedia technology, games have grown in both variety and functionality. To provide players with a more realistic gaming experience, technicians strive to improve not only the fineness of game graphics but also the realism of game audio.
In the related art, technicians often produce game audio within the same application used for other game assets; that is, game audio is produced at the same time as the models and maps of the different objects in a game are constructed.
However, producing game audio in this way involves a large amount of complex and tedious resource loading and parameter setting work, which makes the production of game audio inefficient.
Disclosure of Invention
The embodiments of the application provide an audio binding method, apparatus, device, and storage medium, which improve both the realism of audio and the efficiency of audio configuration. The technical solutions are as follows:
in one aspect, an audio binding method is provided, and the method includes:
loading audio configuration information generated by a second application through a first application, wherein the audio configuration information comprises playing parameters of a plurality of audio units and a storage position of at least one audio in each audio unit;
acquiring a target audio unit selected from the audio configuration information and a target virtual object selected from a virtual scene;
and binding the playing parameters of the target audio unit, the storage position of at least one audio in the target audio unit and the target virtual object.
In a possible implementation, after playing the at least one audio based on the plurality of virtual players, the method further includes:
under the condition that the plurality of virtual players play simultaneously, in response to receiving a play instruction of any audio in the at least one audio, controlling the virtual player which plays earliest in the plurality of virtual players to stop playing the current audio;
and controlling the virtual player which plays at the earliest time to play any audio.
In a possible implementation, the plurality of audio units belong to at least two audio unit groups, and playing the at least one audio based on the playing parameters of the target audio unit comprises:
acquiring, from the playing parameters, the unit group volume of the audio unit group to which the target audio unit belongs and the branch volume of the target audio unit in the audio unit group;
determining a playback volume of the at least one audio based on the unit group volume and the branch volume;
and playing the at least one audio at the playback volume.
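For illustration, a minimal sketch (in Python) of one way to combine the two volumes is given below. The embodiments do not fix the combination rule; the multiplicative combination of normalized gains used here is an assumption.

```python
def playback_volume(unit_group_volume: float, branch_volume: float) -> float:
    """Combine the group-level volume and the per-unit branch volume.

    The text only states that the playback volume is determined from both
    values; multiplying normalized gains (0.0 to 1.0) is one common
    convention and is assumed here purely for illustration.
    """
    return max(0.0, min(1.0, unit_group_volume * branch_volume))

# e.g. a group at 80% volume with a branch volume of 50% plays at 0.4
volume = playback_volume(0.8, 0.5)
```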
In a possible implementation, playing the at least one audio based on the playing parameters of the target audio unit includes:
acquiring a pre-audio of the audio unit from the audio configuration information, wherein the pre-audio is an audio played before the at least one audio is played;
playing the pre-audio;
and in response to the pre-audio finishing playing, playing the at least one audio.
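A minimal sketch of this sequencing follows; the `unit` dict and the blocking `play(path)` callback are hypothetical stand-ins, not part of the patent.

```python
def play_unit_with_pre_audio(unit: dict, play) -> None:
    """Play a unit's pre-audio to completion, then its main audios.

    `unit` is a hypothetical dict built from the audio configuration
    information; `play(path)` is assumed to block until playback ends,
    standing in for the "in response to the pre-audio finishing playing"
    condition.
    """
    pre = unit.get("pre_audio")
    if pre is not None:
        play(pre)          # the lead-in clip plays first
    for path in unit["audio_paths"]:
        play(path)         # then the unit's own audios
```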
In one aspect, an audio binding apparatus is provided, the apparatus comprising:
the loading module is used for loading audio configuration information generated by a second application through a first application, wherein the audio configuration information comprises the playing parameters of a plurality of audio units and the storage position of at least one audio in each audio unit;
a first obtaining module, configured to obtain a target audio unit selected from the audio configuration information and a target virtual object selected from a virtual scene;
and the binding module is used for binding the playing parameters of the target audio unit, the storage position of at least one audio in the target audio unit, and the target virtual object.
In a possible implementation, the loading module is configured to display, through the first application, a configuration information loading interface of the virtual scene, where the configuration information loading interface includes at least one information identifier, each referring to one piece of audio configuration information generated by the second application; and in response to detecting a selection operation on any information identifier, load the audio configuration information referred to by that information identifier.
In a possible embodiment, the apparatus further comprises:
the switching module is used for switching the configuration information loading interface into an audio binding interface through the first application, wherein the audio binding interface comprises an audio unit selection control and a virtual object selection control;
the loading module is further configured to, in response to detecting a trigger operation on the audio unit selection control, display an audio unit selection interface, where audio identifiers of the multiple audio units are displayed in the audio unit selection interface, each audio identifier referring to one of the multiple audio units; in response to detecting a selection operation on any audio identifier, acquire the target audio unit referred to by that audio identifier; in response to detecting a trigger operation on the virtual object selection control, display a virtual object selection interface, where object identifiers of multiple virtual objects in the virtual scene are displayed, each object identifier referring to one virtual object in the virtual scene; and in response to detecting a selection operation on any object identifier, acquire the target virtual object corresponding to that object identifier;
the audio binding interface further includes an audio binding control, and the binding module is configured to bind the playing parameters of the target audio unit and the storage location of at least one audio in the target audio unit with the target virtual object in response to detecting a trigger operation on the audio binding control.
In a possible implementation, an audio configuration information generating apparatus is provided, the apparatus including:
a second obtaining module, configured to obtain, through an audio editing interface of the second application, the playing parameters of the multiple audio units and a storage location of at least one audio in each of the audio units, which are input in the audio editing interface;
the first generation module is used for responding to configuration information generation operation on the audio editing interface and generating the audio configuration information based on the playing parameters of the audio units and the storage position of at least one audio in each audio unit.
In a possible embodiment, the apparatus further comprises:
and the dragging module is used for, in response to a drag operation on any audio unit in the first audio unit group, transferring that audio unit from the first audio unit group to a second audio unit group corresponding to the end position of the drag operation.
In a possible embodiment, the apparatus further comprises:
a second generating module, configured to generate a scene configuration file of the virtual scene based on the plurality of virtual objects in the virtual scene, the playing parameters of the audio unit respectively bound to the plurality of virtual objects, and a storage location of at least one audio in the audio unit, where the scene configuration file is used to construct the virtual scene.
In a possible embodiment, the apparatus further comprises:
a first playing module, configured to load the at least one audio from a storage location of the at least one audio in response to detecting a playing test instruction of the target audio unit; playing the at least one audio based on the playing parameters of the target audio unit.
In a possible embodiment, the apparatus further comprises:
a second playing module, configured to obtain a first duration, a second duration, a third duration, and a target volume from the playing parameters, where the target volume is a maximum volume for playing the at least one audio, the first duration is a duration for gradually increasing from a lowest volume to the target volume when the at least one audio is played, the second duration is a duration for gradually decreasing from the target volume to the lowest volume when the at least one audio is played, and the third duration is a duration for playing with the target volume; controlling the at least one audio to gradually increase from the lowest volume to the target volume within the first duration; and in response to the time length that the at least one audio is played at the target volume reaching the third time length, controlling the at least one audio to gradually decrease from the target volume to the lowest volume within the second time length.
In a possible embodiment, the apparatus further comprises:
a third playing module, configured to obtain a playing weight of the at least one audio from the playing parameter, where the playing weight is used to represent a playing probability of the audio; determining a target audio from the at least one audio based on the playback weight; and playing the target audio.
In a possible embodiment, the apparatus further comprises:
a fourth playing module, configured to obtain a playing distance of the target audio unit from the playing parameter, where the playing distance is a maximum influence distance of the at least one audio in the virtual scene; and responding to the fact that the distance between the target virtual object and a controlled virtual object is smaller than or equal to the playing distance, and playing the at least one audio, wherein the controlled virtual object is a virtual object controlled by a local terminal.
In a possible embodiment, the apparatus further comprises:
a fifth playing module, configured to obtain the maximum playing times of the target audio unit from the playing parameters, where the maximum playing times is the maximum number of audios played at the same time; create an audio pool of the audio unit, where the audio pool includes a number of virtual players equal to the maximum playing times; and play the at least one audio based on the plurality of virtual players.
In a possible implementation manner, the fifth playing module is further configured to, in a case that the plurality of virtual players play simultaneously, in response to receiving a playing instruction for any audio in the at least one audio, control a virtual player that plays earliest in the plurality of virtual players to stop playing a current audio; and controlling the virtual player which plays at the earliest time to play any audio.
In a possible implementation, the plurality of audio units belong to at least two groups of audio units, the apparatus further comprising:
a sixth playing module, configured to obtain, from the playing parameters, the unit group volume of the audio unit group to which the target audio unit belongs and the branch volume of the target audio unit in the audio unit group; determine a playback volume of the at least one audio based on the unit group volume and the branch volume; and play the at least one audio at the playback volume.
In a possible embodiment, the apparatus further comprises:
a seventh playing module, configured to obtain a pre-audio of an audio unit from the audio configuration information, where the pre-audio is an audio played before the at least one audio is played; play the pre-audio; and in response to the pre-audio finishing playing, play the at least one audio.
In one aspect, a computer device is provided that includes one or more processors and one or more memories having at least one computer program stored therein, the computer program being loaded and executed by the one or more processors to implement the audio binding method.
In one aspect, a computer-readable storage medium having at least one computer program stored therein is provided, the computer program being loaded and executed by a processor to implement the audio binding method.
In one aspect, a computer program product or a computer program is provided, including program code stored in a computer-readable storage medium. A processor of a computer device reads the program code from the computer-readable storage medium and executes it, so that the computer device performs the audio binding method described above.
According to the technical solutions provided by the embodiments of the application, the terminal can load, through the first application, the audio configuration information generated by the second application, and bind the target audio unit to the target virtual object based on that information. Because the audio configuration information includes the playing parameters of the audio units and the storage locations of their audios, no complex and tedious resource loading and parameter setting work is needed: by directly loading the audio configuration information through the first application, the audio resources can be loaded and the playing parameters obtained, which improves the efficiency of configuring audio for virtual objects.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an implementation environment of an audio binding method provided by an embodiment of the present application;
fig. 2 is a flowchart of an audio binding method provided in an embodiment of the present application;
fig. 3 is a flowchart of an audio binding method provided in an embodiment of the present application;
fig. 4 is a flowchart of an audio binding method provided in an embodiment of the present application;
FIG. 5 is a schematic view of an interface provided by an embodiment of the present application;
FIG. 6 is a schematic view of an interface provided by an embodiment of the present application;
FIG. 7 is a schematic view of an interface provided by an embodiment of the present application;
FIG. 8 is a schematic view of an interface provided by an embodiment of the present application;
FIG. 9 is a schematic view of an interface provided by an embodiment of the present application;
FIG. 10 is a schematic view of an interface provided by an embodiment of the present application;
FIG. 11 is a logic diagram for creating an audio pool according to an embodiment of the present application;
FIG. 12 is a logical block diagram provided in an embodiment of the present application;
FIG. 13 is a schematic diagram of a logical relationship provided by an embodiment of the present application;
fig. 14 is a flowchart illustrating playing audio by a playing interface according to an embodiment of the present application;
FIG. 15 is a flowchart of a method for invoking a virtual object sound control interface according to an embodiment of the present application;
fig. 16 is a flowchart illustrating playing audio by a playing interface according to an embodiment of the present application;
FIG. 17 is a logic block diagram provided by an embodiment of the present application;
FIG. 18 is a logic block diagram provided by an embodiment of the present application;
fig. 19 is a flowchart of an audio binding method provided in an embodiment of the present application;
fig. 20 is a schematic structural diagram of an audio binding apparatus according to an embodiment of the present application;
fig. 21 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The terms "first," "second," and the like in this application are used for distinguishing between similar items and items that have substantially the same function or similar functionality, and it should be understood that "first," "second," and "nth" do not have any logical or temporal dependency or limitation on the number or order of execution.
The term "at least one" in this application means one or more, "a plurality" means two or more, for example, a plurality of reference face images means two or more reference face images.
Unity 3D: unity3D is a cross-platform 2D/3D game engine, which can be used to develop stand-alone games for Windows, MacOS and Linux platforms, video games for game host platforms such as PlayStation, XBox, Wii, 3DS and nintendo Switch, and games for mobile devices such as iOS and Android.
3D sound: the finger sound varies with the relative position and orientation of the sound producing body and Listener.
Virtual scene: a scene displayed (or provided) by an application program when it runs on a terminal. The virtual scene may be a simulation of the real world, a semi-simulated and semi-fictional virtual environment, or a purely fictional virtual environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene; the dimension of the virtual scene is not limited in the embodiments of the present application. For example, a virtual scene may include sky, land, and ocean, the land may include environmental elements such as deserts and cities, and a user can control a virtual object to move in the virtual scene.
Virtual object: refers to a movable object in a virtual scene. The movable object may be a virtual character, a virtual animal, an animated character, and the like, such as the characters, animals, plants, oil drums, walls, and stones displayed in the virtual scene. The virtual object may be an avatar that represents the user in the virtual scene. The virtual scene may include a plurality of virtual objects, each of which has its own shape and volume in the virtual scene and occupies a part of the space in the virtual scene.
Alternatively, the virtual object may be a user character controlled through operations on the client, an artificial intelligence (AI) character set up in a virtual scene battle through training, or a non-player character (NPC) set up for interaction in the virtual scene. Alternatively, the virtual object may be a virtual character competing in the virtual scene. Optionally, the number of virtual objects participating in the interaction in the virtual scene may be preset, or may be dynamically determined according to the number of clients participating in the interaction.
Virtual object (scene object): an obstacle in the virtual scene that blocks the passage of movable virtual objects. Optionally, such objects include a virtual door, a virtual firearm, a virtual roadblock, a virtual box, a virtual window, a virtual vehicle, a virtual tree, and the like.
Fig. 1 is a schematic diagram of an implementation environment of an audio binding method provided in an embodiment of the present application, and referring to fig. 1, the implementation environment may include a terminal 110 and a server 140.
The terminal 110 is connected to the server 140 through a wireless or wired network. Optionally, the terminal 110 is a smartphone, a tablet, a laptop, a desktop computer, or the like, but is not limited thereto. The terminal 110 has installed and runs applications that support audio editing and virtual scene editing.
Optionally, the server is an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery network (CDN), and big data and artificial intelligence platforms.
Optionally, the terminal 110 generally refers to one of a plurality of terminals, and the embodiment of the present application is illustrated by the terminal 110.
Those skilled in the art will appreciate that the number of terminals may be greater or fewer. For example, there may be only one terminal, or several tens or hundreds of terminals, or more, in which case the implementation environment also includes other terminals. The number of terminals and the device types are not limited in the embodiments of the present application.
After the implementation environment provided by the embodiment of the present application is introduced, an application scenario of the embodiment of the present application is introduced below.
The technical solution provided by the embodiments of the application can be applied to game production. In some embodiments, a sound designer produces game audio through the second application, while a function developer builds in-game scenes and implements functionality through the first application. When producing game audio, the sound designer configures different playing parameters for different game audios according to the conditions of the different objects in the game, thereby improving the playing effect of the game audio. Referring to fig. 2, the sound designer can configure playing parameters for the game audio through the second application, which integrates those parameters into an audio configuration file for the first application to load. The function developer can load the audio configuration file through the first application and thereby quickly obtain the playing parameters of the different game audios: there is no need to configure the playing parameters manually, only to bind the playing parameters of the different game audios to the corresponding virtual objects. This reduces the function developer's workload and improves the efficiency of game audio production.
The above description uses a sound designer and a function developer as examples, but in other possible embodiments the sound designer and the function developer may be the same person. In the following description, "technician" refers to the sound designer, the function developer, or any other person involved in the game production process; this is not limited in the embodiments of the present application.
Fig. 3 is a flowchart of an audio binding method provided in an embodiment of the present application, and referring to fig. 3, the method includes:
301. The terminal loads audio configuration information generated by the second application through the first application, wherein the audio configuration information comprises playing parameters of a plurality of audio units and a storage position of at least one audio in each audio unit.
The first application is an application for editing a virtual scene, the second application is an application for editing audio units, and the two are different applications. An audio unit is a collection of multiple audios, and one audio unit can be bound to one or more virtual objects in a virtual scene. In some embodiments, the audio configuration information is an audio configuration file generated by the second application and loadable by the first application. The playing parameters indicate the playing mode of the at least one audio in the audio unit; for example, if the playing parameters include a volume, the terminal plays the at least one audio at that volume.
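For concreteness, the sketch below shows one hypothetical JSON-style layout for such audio configuration information. The patent does not specify a serialization format, and every field name here is illustrative, not taken from the source.

```python
# A hypothetical layout for the audio configuration information;
# field names are assumptions made purely for illustration.
audio_config = {
    "units": [
        {
            "unit_id": 113,                   # identifier of the audio unit
            "group": "weapons",               # audio unit group (audio bus)
            "play_params": {
                "volume": 0.8,                # playback volume
                "max_simultaneous_plays": 3,  # maximum playing times
                "play_distance": 500,         # audible range in the scene
                "weights": [0.4, 0.6],        # per-audio play weights
            },
            "audio_paths": [                  # storage location of each audio
                "Audio/Sample/UI/A",
                "Audio/Sample/UI/B",
            ],
        },
    ],
}
```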
302. The terminal acquires a target audio unit selected from the audio configuration information and a target virtual object selected from the virtual scene.
In some embodiments, the virtual scene is also referred to as a game scene, and the virtual objects are objects such as virtual stones, virtual houses, and virtual guns in the game scene, which is not limited in the embodiments of the present application.
303. The terminal binds the playing parameters of the target audio unit and the storage position of at least one audio in the target audio unit with the target virtual object.
In some embodiments, during game production, the audios to be loaded in the game are stored in a single folder. The audio configuration information can include the storage locations of the different audios, and the first application can load each audio based on its storage location.
According to the technical solutions provided by the embodiments of the application, the terminal can load, through the first application, the audio configuration information generated by the second application, and bind the target audio unit to the target virtual object based on that information. Because the audio configuration information includes the playing parameters of the audio units and the storage locations of their audios, no complex and tedious resource loading and parameter setting work is needed: by directly loading the audio configuration information through the first application, the audio resources can be loaded and the playing parameters obtained, which improves the efficiency of configuring audio for virtual objects.
Fig. 4 is a flowchart of an audio binding method provided in an embodiment of the present application, and referring to fig. 4, the method includes:
401. The terminal loads audio configuration information generated by the second application through the first application, wherein the audio configuration information comprises playing parameters of a plurality of audio units and a storage position of at least one audio in each audio unit.
The terminal can configure different action parameters for different virtual objects in the virtual scene through the first application, where the action parameters indicate the action performed when the corresponding virtual object is triggered; the terminal can also set the positions of the virtual objects in the virtual scene through the first application. For example, a technician can build models of different virtual objects in the virtual scene through a modeling application and import them into the first application. The technician can then configure action parameters for the models of the different virtual objects through the first application, and can likewise set the positions of those models in the virtual scene; optionally, the first application is Unity3D. In some embodiments, the action parameters include audio parameters, deformation parameters, display effect parameters, and the like, which is not limited in this application. The second application is an application for editing audio units. In some embodiments, referring to fig. 5, a technician can create multiple audio units through an audio unit creation control 502 in an application interface 501 of the second application, and can set different playing parameters for the audio units through a parameter setting area 503 in the application interface 501.
In a possible implementation, the terminal displays a configuration information loading interface of the virtual scene through the first application, where the configuration information loading interface includes at least one information identifier, each referring to one piece of audio configuration information generated by the second application. In response to detecting a selection operation on any information identifier, the terminal loads the audio configuration information referred to by that information identifier. Optionally, the audio configuration information is an audio configuration file.
The terminal may store multiple pieces of audio configuration information, with different pieces applicable to different virtual scenes. When generating audio configuration information through the second application, the technician can set an information identifier for it based on the virtual scene to which it applies; for example, the information identifier of the audio configuration information applicable to virtual scene A may be set to "audio configuration information for virtual scene A" or "information identifier for virtual scene A". Of course, the purpose of the information identifier is simply to distinguish different pieces of audio configuration information, and the technician may set it in other ways, which is not limited in the embodiments of the present application.
Under this embodiment, technicians can quickly select the configuration information to be used through the configuration information loading interface displayed by the first application, so the human-computer interaction is efficient.
For example, referring to fig. 6, an audio binding control 602 is displayed in an audio binding interface 601 of the first application. In response to detecting the triggering operation of the audio binding control 602, the terminal displays a configuration information loading interface 603 of the virtual scene, wherein the configuration information loading interface comprises at least one information identifier. In response to detecting a selection operation on any information identifier 604, the audio configuration information referred to by the information identifier 604 is loaded.
In order to more clearly describe the technical solution provided in the embodiment of the present application, a method for generating audio configuration information by a second application is described below:
In a possible implementation, the terminal acquires, through the audio editing interface of the second application, the playing parameters of the multiple audio units and the storage location of at least one audio in each audio unit as input in the audio editing interface. In response to a configuration information generation operation on the audio editing interface, the terminal generates, through the second application, the audio configuration information based on the playing parameters of the audio units and the storage location of at least one audio in each audio unit.
In this embodiment, the terminal can acquire the playing parameters of different audio units through the audio editing interface of the second application and integrate them to generate the audio configuration information; the first application can then directly import the audio configuration information generated by the second application and play the audios in the audio units based on it.
For example, referring to fig. 7, the terminal displays an audio editing interface 701 of the second application, where the audio editing interface 701 includes a parameter setting area 702 and a toolbar 703. A technician can configure playing parameters for different audio units in the parameter setting area 702, and the second application can read the playing parameters from the parameter setting area 702. Optionally, the playing parameters include at least one of the following: the playing order of the audios in the audio unit, the maximum playing times of the audios in the audio unit, the playing weights of the audios in the audio unit, the unit group volume of the audio unit group to which the audio unit belongs, the branch volumes of the audios in the audio unit, the delayed playing duration of the audios in the audio unit, and the pre-audio of the audio unit. The technician can set any of these playing parameters through the parameter setting area 702. The toolbar 703 includes multiple function controls. In some embodiments, the toolbar 703 includes an audio unit creation control 7031, and a technician can create an audio unit by triggering it. The toolbar 703 further includes a play control 7032 and a stop control 7033: by triggering the play control 7032, the technician can play the audios in an audio unit with the configured playing parameters to verify the playing effect; by triggering the stop control 7033, the technician can stop the playback.
As for the storage locations of the audios, a technician can enter the storage location of at least one audio in each audio unit through the parameter setting area 702, or have it filled in automatically by dragging the audio file into the parameter setting area 702. During the drag, the terminal loads the storage location of the audio file into memory; in response to the audio file being dropped into the parameter setting area 702, the terminal obtains the storage location from memory through the second application and fills it into the corresponding field of the parameter setting area 702.
Referring to fig. 7, a generation control 704 is displayed on the audio editing interface 701. In response to detecting a trigger operation on the generation control 704, the terminal generates the audio configuration information based on the playing parameters of the multiple audio units and the storage location of at least one audio in each audio unit. In some embodiments, the audio configuration information is an audio configuration file, which the technician can copy.
On the basis of the foregoing embodiment, optionally, the audio unit groups to which the multiple audio units belong are displayed on the audio editing interface. In response to a drag operation on any audio unit in a first audio unit group, the terminal transfers that audio unit, through the second application, from the first audio unit group to the second audio unit group corresponding to the end position of the drag operation. Optionally, the audio units in the same audio unit group correspond to the same virtual object in the virtual scene, and when the virtual object is in different states the terminal can load different audio units from that group for playback. Optionally, to make the audio played in the virtual scene more realistic, the terminal may bind, through the first application, one virtual object with multiple audio units, each corresponding to a state of the virtual object.
When a virtual object is in a given state, different terminals may load different audio units. For example, suppose a virtual object A and a virtual object B exist in a virtual scene, and virtual object A holds a virtual firearm. The first terminal controlling virtual object A and the second terminal controlling virtual object B exchange data through the server. For the first terminal, in response to detecting that the virtual firearm held by virtual object A is firing, it determines that the object using the firearm is its own currently controlled virtual object A, and plays the first audio unit, that is, the audio unit configured for the virtual firearm to be played when the currently controlled virtual object uses it; the user of the first terminal thus hears the playing effect of the first audio unit. Also in response to detecting that the virtual firearm is firing, the server sends firing information carrying the identifier of the virtual firearm to the second terminal. In response to receiving the firing information, the second terminal determines that the object using the virtual firearm is not its currently controlled virtual object B, obtains the firearm's identifier from the firing information, and loads the corresponding second audio unit, which is also configured for the virtual firearm but is played when the firearm is used by a virtual object that is not currently controlled. Conversely, if virtual object B uses the virtual firearm, the second terminal plays the first audio unit and the first terminal plays the second audio unit. In some embodiments, an audio unit group is also referred to as an audio bus. A minimal sketch of this selection logic follows.
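The sketch below illustrates the selection described above under stated assumptions: the event fields and the `play_unit` callback are hypothetical, and "first"/"other" stand for the first and second audio units.

```python
def on_firing_event(event: dict, local_object_id: str, play_unit) -> None:
    """Choose which audio unit to play when a virtual firearm fires.

    The terminal whose controlled object fired plays the first audio
    unit; any other terminal plays the second audio unit. `event` field
    names and `play_unit` are illustrative assumptions.
    """
    firearm_id = event["firearm_id"]
    if event["shooter_id"] == local_object_id:
        play_unit(firearm_id, perspective="first")   # first audio unit
    else:
        play_unit(firearm_id, perspective="other")   # second audio unit
```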
Under this embodiment, when a technician needs to move an audio unit from one audio unit group to another, no tedious operations are required: audios can be moved between audio unit groups quickly through the second application alone, so the human-computer interaction is efficient.
For example, referring to fig. 7, an audio unit group display area 705 is further displayed on the audio editing interface 701, showing multiple audio unit groups, each of which includes at least one audio unit. In response to detecting a drag operation on any audio unit 7052 in the first audio unit group 7051, the terminal transfers the audio unit 7052, through the second application, from the first audio unit group 7051 to the second audio unit group 7053 corresponding to the end position of the drag operation.
The meaning of transferring an audio unit between groups is explained below by way of an example.
Suppose the technician configures audio unit group G1 and audio unit group G2 for a virtual firearm, where audio unit group G1 corresponds to the first terminal and audio unit group G2 corresponds to the second terminal; the first terminal is the terminal controlling the virtual object that uses the virtual firearm, and the second terminal is a terminal controlling other virtual objects. When the virtual object controlled by the first terminal uses the virtual firearm, the first terminal plays audio from audio unit group G1; if a virtual object controlled by the second terminal is located near the virtual object controlled by the first terminal, the second terminal plays audio from audio unit group G2. When an audio a in audio unit group G1 turns out to be better suited to audio unit group G2, the technician can drag audio a into audio unit group G2 through the second application.
Optionally, the audio editing interface further includes a log recording area, which records the operations performed on the audio editing interface and the user identifiers that performed them.
In this embodiment, the second application can record audio unit editing operations and the identifiers of the editing users through the log recording area, facilitating record backtracking.
For example, referring to fig. 7, the audio editing interface 701 further includes a log recording area 706. In response to detecting an audio editing operation by any account through the second application, the account and the corresponding audio editing operation are displayed in the log recording area 706. In some embodiments, the log recording area 706 also displays the time of the audio editing operation.
402. The terminal acquires a target audio unit selected from the audio configuration information and a target virtual object selected from the virtual scene.
In a possible implementation, the terminal switches the configuration information loading interface to an audio binding interface through the first application, where the audio binding interface includes an audio unit selection control and a virtual object selection control. In response to detecting a trigger operation on the audio unit selection control, the terminal displays, through the first application, an audio unit selection interface in which audio identifiers of the multiple audio units are displayed, each audio identifier referring to one of the multiple audio units. In response to detecting a selection operation on any audio identifier, the terminal acquires, through the first application, the target audio unit referred to by that audio identifier. In response to detecting a trigger operation on the virtual object selection control, the terminal displays, through the first application, a virtual object selection interface in which object identifiers of multiple virtual objects in the virtual scene are displayed, each object identifier referring to one virtual object in the virtual scene. In response to detecting a selection operation on any object identifier, the terminal acquires, through the first application, the target virtual object corresponding to that object identifier.
Under this embodiment, technicians can quickly select the target audio unit and the target virtual object through the first application, so the human-computer interaction is efficient.
For example, referring to fig. 6 and 8, the terminal can switch the configuration information loading interface 603 to the audio binding interface 801, which includes an audio unit selection control 802 and a virtual object selection control 803. In response to detecting a trigger operation on the audio unit selection control 802, referring to fig. 9, the terminal switches, through the first application, the audio binding interface 801 to an audio unit selection interface 901, in which audio identifiers of multiple audio units are displayed. In response to detecting a selection operation on any audio identifier 902, the terminal acquires, through the first application, the target audio unit referred to by the audio identifier 902. The terminal then switches the audio unit selection interface 901 back to the audio binding interface 801 through the first application. In response to detecting a trigger operation on the virtual object selection control 803, referring to fig. 10, the terminal displays, through the first application, a virtual object selection interface 1001 in which object identifiers of multiple virtual objects are displayed. In response to detecting a selection operation on any object identifier 1002, the terminal acquires, through the first application, the target virtual object referred to by the object identifier 1002.
403. The terminal binds the playing parameters of the target audio unit and the storage position of at least one audio in the target audio unit with the target virtual object.
In a possible implementation manner, the audio binding interface further includes an audio binding control, and in response to detecting a trigger operation on the audio binding control, the terminal binds, through the first application, the playing parameter of the target audio unit and the storage location of at least one audio in the target audio unit with the target virtual object.
For example, referring to fig. 8, the audio binding interface 801 further includes an audio binding control 804. In response to detecting a trigger operation on the audio binding control 804, the terminal binds, through the first application, the playing parameters of the target audio unit and the storage location of at least one audio in the target audio unit with the target virtual object.
On the terminal side, suppose the identifier of the target audio unit is 113, the target virtual object is Object A, and the playing parameters of the target audio unit are (1, 2, 3, 4, 5, 6), where the different numbers represent different playing parameters. The target audio unit includes two audios whose storage locations are Audio/Sample/UI/A and Audio/Sample/UI/B, respectively (UI: user interface). The terminal can then bind Object A with 113, (1, 2, 3, 4, 5, 6), Audio/Sample/UI/A, and Audio/Sample/UI/B through the first application.
Optionally, in addition to binding the playing parameters of the target audio unit and the storage location of at least one audio in the target audio unit with the target virtual object through the first application, the terminal can also bind them with a particular state of the target virtual object. For example, suppose the identifier of the target audio unit is 113, the target virtual object is a virtual firearm identified as Weapon A, the playing parameters of the target audio unit are (1, 2, 3, 4, 5, 6), where the different numbers represent different playing parameters, the target audio unit includes two audios stored at Audio/Sample/UI/A and Audio/Sample/UI/B, and the terminal uses 01 to indicate that the virtual firearm is firing. The terminal can then bind Weapon A with 113, (1, 2, 3, 4, 5, 6), Audio/Sample/UI/A, Audio/Sample/UI/B, and 01 through the first application. In the subsequent game, when the virtual firearm identified as Weapon A fires, that is, when the state of the virtual object identified as Weapon A is 01, the terminal can determine the target audio unit through the identifier 113, acquire the playing parameters (1, 2, 3, 4, 5, 6), and load the audios from Audio/Sample/UI/A and Audio/Sample/UI/B respectively for playback. The sketch below expresses these two bindings as records.
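A minimal sketch of such a binding table follows; the dict layout and the `bind_audio_unit` helper are hypothetical, introduced only to restate the two examples above.

```python
# Hypothetical in-memory binding table kept by the first application;
# the keys and layout are illustrative only.
bindings = {}

def bind_audio_unit(virtual_object_id, unit_id, play_params, audio_paths,
                    state=None):
    """Bind an audio unit's parameters and storage locations to an object.

    `state` optionally narrows the binding to one object state, e.g. "01"
    for "virtual firearm firing" in the example above.
    """
    bindings[(virtual_object_id, state)] = {
        "unit_id": unit_id,
        "play_params": play_params,
        "audio_paths": audio_paths,
    }

# the two examples above, expressed as binding records
bind_audio_unit("Object A", 113, (1, 2, 3, 4, 5, 6),
                ["Audio/Sample/UI/A", "Audio/Sample/UI/B"])
bind_audio_unit("Weapon A", 113, (1, 2, 3, 4, 5, 6),
                ["Audio/Sample/UI/A", "Audio/Sample/UI/B"], state="01")
```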
After step 403, the terminal can perform both step 404 and step 405, which are described below; this is not limited in the embodiments of the present application.
404. The terminal generates a scene configuration file of the virtual scene based on a plurality of virtual objects in the virtual scene, playing parameters of audio units respectively bound with the virtual objects and a storage position of at least one audio in the audio units through the first application, wherein the scene configuration file is used for constructing the virtual scene.
The scene configuration file stores configuration information of the multiple virtual objects in the virtual scene, such as each virtual object's position in the virtual scene and its associated audio information. In some embodiments, the first application can also generate resource files of the virtual scene, including model files of the virtual objects, audio files, and the like. The terminal can construct the virtual scene based on the scene configuration file and the resource files.
405. In response to detecting the play test instruction of the target audio unit, the terminal loads at least one audio from a storage location of the at least one audio.
The play test instruction instructs the terminal to play the audios in the target audio unit; the at least one audio here is the at least one audio in the target audio unit.
In a possible implementation, the audio binding interface further includes an audio unit playing control, and in response to a trigger operation on the audio unit playing control, the terminal loads the at least one audio into memory based on its storage location.
For example, referring to fig. 8, the audio binding interface 801 further includes an audio unit playing control 805, and in response to detecting a triggering operation on the audio unit playing control 805, the terminal loads, through the first application, at least one audio to the memory based on a storage location of the at least one audio.
In a possible embodiment, the play test instruction of the target audio unit is triggered by the target virtual object entering a target state. The target state is set by a technician according to the type of the target virtual object; taking a virtual firearm as the target virtual object, the target state can be that the virtual firearm is firing. In response to detecting that the virtual firearm is firing, the terminal triggers the play test instruction of the target audio unit; in response to detecting that instruction, the terminal loads the at least one audio based on its storage location.
406. The terminal plays at least one audio based on the playing parameters of the target audio unit through the first application.
In a possible implementation, the terminal obtains, through the first application, a first duration, a second duration, a third duration, and a target volume from the playing parameters of the target audio unit. The target volume is the maximum volume at which the at least one audio is played; the first duration is the time over which playback gradually rises from the lowest volume to the target volume; the second duration is the time over which playback gradually falls from the target volume to the lowest volume; and the third duration is the time for which playback stays at the target volume. The terminal controls, through the first application, the at least one audio to rise gradually from the lowest volume to the target volume within the first duration. In response to the time played at the target volume reaching the third duration, the terminal controls, through the first application, the at least one audio to fall gradually from the target volume to the lowest volume within the second duration.
The process of gradually rising from the lowest volume to the target volume is also referred to as fade-in, and the first duration as the fade-in duration; correspondingly, the process of gradually falling from the target volume to the lowest volume is also referred to as fade-out, and the second duration as the fade-out duration. The third duration is the time for which the at least one audio is played at the target volume. A sketch of this envelope follows.
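The sketch below computes the instantaneous gain of this fade-in / hold / fade-out envelope. The patent does not specify the ramp shape, so the linear ramps used here are an assumption.

```python
def fade_gain(t: float, fade_in: float, hold: float, fade_out: float,
              target: float) -> float:
    """Volume at time t (seconds) of a fade-in / hold / fade-out envelope.

    Assumes linear ramps: rise from silence to `target` over `fade_in`
    seconds (first duration), hold for `hold` seconds (third duration),
    then fall back to silence over `fade_out` seconds (second duration).
    """
    if t < fade_in:                          # fade-in phase
        return target * (t / fade_in)
    if t < fade_in + hold:                   # steady phase at target volume
        return target
    if t < fade_in + hold + fade_out:        # fade-out phase
        return target * (1.0 - (t - fade_in - hold) / fade_out)
    return 0.0                               # playback finished
```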
Under this embodiment, fade-in and fade-out effects can be added when the terminal plays audio, making the playback smoother and improving the playing effect.
In a possible implementation, the terminal obtains, through the first application, the play weight of the at least one audio from the playing parameters, where the play weight represents the playing probability of an audio. The terminal determines a target audio from the at least one audio based on the play weights, and plays the target audio through the first application.
Under this embodiment, the terminal determines the target audio to play based on the play weight of the at least one audio, which increases the variety of audio playback.
For example, suppose the target audio unit includes three audios, and the terminal obtains their play weights, 0.2, 0.3, and 0.5, from the playing parameters through the first application. The terminal generates a sequence of the ten digits 0 to 9 based on the three play weights, where digits 0 and 1 represent the audio with play weight 0.2, digits 2 to 4 represent the audio with play weight 0.3, and digits 5 to 9 represent the audio with play weight 0.5. The terminal then randomly generates an integer in the range 0 to 9: if it is 0 or 1, the terminal plays the audio with play weight 0.2; if it is any of 2 to 4, the audio with play weight 0.3; and if it is any of 5 to 9, the audio with play weight 0.5. A sketch of this scheme follows.
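The sketch below mirrors the digit-sequence scheme of the example, assuming the weights sum to 1 in steps of 0.1; the function name and signature are illustrative.

```python
import random

def pick_audio(audios, weights):
    """Pick one audio according to its play weight.

    With weights 0.2/0.3/0.5, digits 0-1, 2-4, and 5-9 map to the three
    audios, and one random digit selects the audio to play.
    """
    digit = random.randint(0, 9)
    threshold = 0
    for audio, weight in zip(audios, weights):
        threshold += round(weight * 10)
        if digit < threshold:
            return audio
    return audios[-1]  # guard against rounding leaving a gap

chosen = pick_audio(["a.wav", "b.wav", "c.wav"], [0.2, 0.3, 0.5])
```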
In a possible implementation, the terminal obtains, through the first application, the playing distance of the target audio unit from the playing parameters, where the playing distance is the maximum influence distance of the at least one audio in the virtual scene. In response to the distance between the target virtual object and the controlled virtual object being less than or equal to the playing distance, the terminal plays the at least one audio through the first application, where the controlled virtual object is the virtual object controlled by the local terminal.
Under this embodiment, the terminal plays the at least one audio only when the distance between the controlled virtual object and the target virtual object is within the playing distance, simulating the propagation range of sound in the real world and improving the realism of the audio.
For example, the terminal can obtain the playing distance 500 of the target audio unit from the playing parameters, 500 means that the local terminal controlling the controlled virtual object will play at least one audio only when the distance between the controlled virtual object and the target virtual object is less than or equal to 500. The terminal can determine a distance between the controlled virtual object and the target virtual object in real time, and in response to detecting that the distance between the controlled virtual object and the target virtual object is less than or equal to 500, the terminal plays the at least one audio. In some embodiments, the attribute of the controlled virtual object in the first application is Listener (Listener).
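Under the assumption that object positions are plain 3D coordinates, the distance gate might be sketched in Python as follows; the positions and the print call are illustrative only.

```python
import math

def should_play(listener_pos, source_pos, play_distance):
    """Return True when the source is within the audible range."""
    return math.dist(listener_pos, source_pos) <= play_distance

listener = (0.0, 0.0, 0.0)        # controlled virtual object (Listener)
source = (120.0, 0.0, 350.0)      # target virtual object bound to the audio
if should_play(listener, source, 500.0):
    print("within playing distance: play the at least one audio")
```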
In a possible implementation manner, the terminal obtains, through the first application, the maximum playing times of the target audio unit from the playing parameters, where the maximum playing times is the maximum number of audios that can be played at the same time. The terminal creates an audio pool of the target audio unit through the first application, where the audio pool includes a plurality of virtual players whose number equals the maximum playing times. The terminal plays the at least one audio based on the plurality of virtual players through the first application.
In this embodiment, the terminal can control the maximum playing times of the target audio unit by limiting the number of virtual players, thereby avoiding the confusion caused by playing too many audios at the same time and improving the playing effect of the audio.
For example, referring to fig. 11, fig. 11 is a logical diagram of creating an audio pool, where 1101 is the identifier of an audio unit, 1102 is the maximum playing times of the audio unit, and 1103 is the audio pool (AudioPlayerPool) of the audio unit, which includes virtual players (AudioPlayer) 11031 whose number equals the maximum playing times 1102. In some embodiments, the first application can maintain, for each virtual object, a list indexed by the identifiers of audio units, where each audio unit identifier corresponds to an audio pool in which the number of virtual players equals the maximum playing times of that audio unit. For the audio unit identified as 123 in fig. 11, the maximum playing times is 3, so its audio pool includes 3 virtual players, and so on.
On the basis of the foregoing embodiment, optionally, after the terminal plays the at least one audio through the first application based on the plurality of virtual players, the method further includes: in a case where the plurality of virtual players are all playing, in response to receiving a playing instruction for any audio in the at least one audio, controlling the virtual player that started playing earliest among the plurality of virtual players to stop playing its current audio, and controlling that virtual player to play the audio indicated by the instruction.
In this embodiment, when the plurality of virtual players are all playing, in response to receiving a playing instruction for another audio in the target audio unit, the terminal can make the earliest-started virtual player stop playing its current audio and control it to play the other audio, which ensures that the number of simultaneously played audios does not exceed the maximum playing times, avoids the confusion caused by a large number of audios playing at the same time, and improves the audio playing effect.
For example, the above policy may be referred to as "steal oldest" (StealOldest). Referring to fig. 11 and taking the audio unit identified as 123 as an example, when the 3 virtual players in the audio pool of the audio unit are all playing, in response to a playing instruction for any audio in the audio unit, the terminal can control the virtual player that has been playing the longest among the 3 virtual players to stop playing its current audio and use that virtual player to play the audio.
Of course, the above example describes the case where the terminal controls the virtual players using the "steal oldest" (StealOldest) policy. In other possible embodiments, the terminal can also control the virtual players in other manners; for example, in the case where the plurality of virtual players are all playing, the terminal can randomly control any one of the plurality of virtual players to stop playing its current audio and control that virtual player to play the other audio, which is not limited in this embodiment of the present application.
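A minimal Python sketch of such an audio pool with the "steal oldest" policy is given below; the VirtualPlayer class is a hypothetical stand-in for the first application's player objects, and only the pooling logic is illustrated.

```python
from collections import deque

class VirtualPlayer:
    """Hypothetical stand-in for one virtual player in the pool."""
    def __init__(self):
        self.current = None

    def play(self, audio):
        self.current = audio              # start (or replace) playback

class AudioPool:
    def __init__(self, max_plays):
        self.idle = [VirtualPlayer() for _ in range(max_plays)]
        self.busy = deque()               # players ordered oldest-first

    def play(self, audio):
        if self.idle:
            player = self.idle.pop()
        else:
            player = self.busy.popleft()  # steal the earliest-started player
        player.play(audio)
        self.busy.append(player)

pool = AudioPool(max_plays=3)
for clip in ["step1", "step2", "step3", "step4"]:
    pool.play(clip)                       # "step4" steals the player playing "step1"
```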
In a possible embodiment, the plurality of audio units belong to at least two audio unit groups, and the terminal obtains, through the first application, the unit group volume of the audio unit group to which the target audio unit belongs and the branch volume of the target audio unit within that group from the playing parameters. The terminal determines the playing volume of the at least one audio through the first application based on the unit group volume and the branch volume, and plays the at least one audio at that playing volume through the first application.
In this embodiment, when one audio unit group includes a plurality of audio units, a technician can adjust the volume of all the audio units in the group at once by adjusting the unit group volume of the audio unit group, so the efficiency of human-computer interaction is high.
For example, the terminal can obtain, through the first application, that the unit group volume of the audio unit group to which the target audio unit belongs is 0.7 and the branch volume of the target audio unit is 0.5. The terminal can multiply the unit group volume 0.7 by the branch volume 0.5 to obtain the playing volume 0.35 of the at least one audio in the target audio unit, and play the at least one audio at the volume of 0.35.
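As a one-line Python sketch of this combination rule (the variable names are illustrative):

```python
unit_group_volume = 0.7     # volume of the audio unit group
branch_volume = 0.5         # branch volume of the target audio unit
play_volume = unit_group_volume * branch_volume   # 0.35
```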
In a possible implementation manner, the terminal obtains, through the first application, the pre-audio of the target audio unit from the audio configuration information, where the pre-audio is an audio played before the at least one audio is played. The terminal plays the pre-audio through the first application, and in response to the pre-audio playback ending, the terminal plays the at least one audio through the first application.
In this embodiment, when playing the audio in the target audio unit through the first application, the terminal can play the pre-audio first and gradually introduce the audio in the target audio unit through the pre-audio, so that the transition between audios is more gradual.
To more clearly illustrate the technical solution provided by the embodiment of the present application, the logical structure related to playing in the first application is described below. Referring to fig. 12, the logical structure includes a virtual object pronunciation control interface 1201 and an audio packaging interface 1202. The virtual object pronunciation control interface 1201 includes a plurality of playing interfaces 12011 and a plurality of playing control interfaces 12012, and is associated with a virtual object 12013 in the virtual scene; the playing control interface 12012 can control the audio played by the playing interface 12011. The audio packaging interface 1202 is used to load a plurality of audios 12022 through the audio resource loading interface 12021. In some embodiments, the virtual object pronunciation control interface 1201 is also referred to as ASPlaybackController, the audio packaging interface 1202 is also referred to as SoundSourcePack, the audio unit is referred to as AudioActor, the virtual object is referred to as AudioGameObject, the playing interface 12011 is referred to as IActorPlayer (one IActorPlayer is used to play one audio in the audio unit), the playing control interface 12012 is referred to as IAudioGameObject (which can control the play, pause, stop, resume, and volume setting of the audio), and the audio resource loading interface 12021 is also referred to as AudioEventMgr.
Referring to fig. 13, when the terminal plays audio through the first application, the playing control interface 12012 (IAudioGameObject) manages all the audio units associated with the virtual object, the virtual object pronunciation control interface 1201 (ASPlaybackController) manages one audio unit, the playing interface 12011 (IActorPlayer) manages the playing of one audio, and the audio packaging interface 1202 (SoundSourcePack) is a bottom-layer interface of the first application that can call the audio in the memory. Among these interfaces, the playing interface 12011 is the core unit for playing audio and controls the playing of a single audio. The playing flow of the playing interface 12011 is shown in fig. 14: before the playing interface 12011 plays the audio, the terminal can initialize the playing interface 12011 through the first application, and after initialization the playing interface 12011 waits to be called by the first application. In response to the terminal calling the playing interface 12011 through the first application, the playing interface 12011 loads the audio in the target audio unit through the audio packaging interface 1202 and obtains the playing parameters of the target audio unit. In some embodiments, if fade-in and fade-out are set in the playing parameters of the target audio unit, the playing interface 12011 performs a fade-in operation on the audio in the target audio unit, plays the audio of the target audio unit after the fade-in, then performs a fade-out operation, and finally stops playing. In some embodiments, in response to the target audio stopping playing, the playing interface 12011 can delete the audio in the target audio unit from the memory to save memory.
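To make this division of responsibilities concrete, the following is a hypothetical Python rendering of the interfaces; the class names follow the text, but every method signature is an assumption made for this sketch.

```python
class SoundSourcePack:
    """Bottom-layer interface: loads audio resources into memory."""
    def load(self, path):
        with open(path, "rb") as f:
            return f.read()

class IActorPlayer:
    """Core playing unit: controls the playback of one audio."""
    def __init__(self, pack):
        self.pack = pack
        self.data = None

    def prepare(self, path):
        self.data = self.pack.load(path)   # load the audio via the pack

    def play(self):
        pass                               # fade in, play, fade out, stop

class ASPlaybackController:
    """Manages one audio unit and generates IActorPlayer instances."""
    def __init__(self, pack):
        self.pack = pack

    def start(self, path):
        player = IActorPlayer(self.pack)
        player.prepare(path)
        player.play()
        return player
```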
Optionally, the playing interface 12011 is generated by the virtual object pronunciation control interface 1201, and its generation depends on the first application calling the virtual object pronunciation control interface 1201. Having introduced the playing flow of the playing interface 12011, the calling mode of the virtual object pronunciation control interface 1201 is introduced below.
Referring to fig. 15, in response to the first application calling the virtual object pronunciation control interface 1201, the state of the virtual object pronunciation control interface 1201 is acquired; the state includes a playing state, a pause state, and a stopped state. In response to the virtual object pronunciation control interface 1201 being in a target state, such as the playing state, the terminal controls the virtual object pronunciation control interface 1201 through the first application to execute the function corresponding to that state, such as generating the playing interface 12011. When the virtual object pronunciation control interface 1201 enters another state, such as the stopped state, the terminal controls it through the first application to execute the function corresponding to that other state. When it does not enter another state, no processing is performed on it.
When the virtual object pronunciation control interface 1201 is in the target state, the following process is also included:
referring to fig. 16, the terminal performs setting initialization on the virtual object pronunciation control interface 1201 through the first application. The virtual object pronunciation control interface 1201 then invokes the callback of the initialized settings; optionally, the callback is set by a technician according to the actual situation. The virtual object pronunciation control interface 1201 calls a random generator to determine the target audio to be played this time based on the playing weight of the at least one audio in the target audio unit, and acquires the storage location of the target audio. It then sets its own state to the playing state and calls the audio packaging interface 1202 to load the target audio into the memory based on the storage location of the target audio.
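A minimal Python sketch of this target-state flow follows, under the assumption that an audio unit is a plain dictionary; the class, field, and callback names are placeholders rather than the first application's real interfaces.

```python
import random

class Pack:
    """Stub for the audio packaging interface (SoundSourcePack)."""
    def load(self, path):
        return f"<audio data loaded from {path}>"

class PlaybackController:
    def __init__(self, audio_unit, pack, on_init=None):
        self.unit = audio_unit
        self.pack = pack
        self.state = "idle"
        if on_init:
            on_init(self)            # callback set by the technician

    def start(self):
        audios = self.unit["audios"]
        weights = [a["weight"] for a in audios]
        target = random.choices(audios, weights=weights, k=1)[0]
        self.state = "playing"       # set the playing state, as in fig. 16
        return self.pack.load(target["path"])

unit = {"audios": [{"path": "a.ogg", "weight": 0.2},
                   {"path": "b.ogg", "weight": 0.8}]}
controller = PlaybackController(unit, Pack())
print(controller.start())
```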
The following describes steps after the target audio is loaded into the memory.
Referring to fig. 17, after the target audio is loaded, the virtual object pronunciation control interface 1201 determines whether the target audio was loaded successfully. When it is determined that the loading succeeded, the virtual object pronunciation control interface 1201 acquires the audio data of the target audio and enters the playing preparation (Prepare) stage. In some embodiments, in the preparation stage, the virtual object pronunciation control interface 1201 obtains the playing parameters of the target audio, obtains from them the unit group volume of the audio unit group to which the target audio unit belongs and the branch volume of the target audio unit, and determines the playing volume of the target audio based on the two. The virtual object pronunciation control interface 1201 also obtains the playing distance of the target audio unit from the playing parameters and configures the playing distance on the target virtual object, determines the playing duration of the target audio, and calls the playing interface 12011. If a fade-in duration is set in the playing parameters, the virtual object pronunciation control interface 1201 can perform fade-in processing on the target audio and then play the target audio.
The following describes the flow in which the virtual object pronunciation control interface 1201 plays audio based on the playing interface 12011.
In one possible implementation, referring to fig. 18, in response to a call instruction, the virtual object pronunciation control interface 1201 validates the target audio unit based on its playing parameters. If the playing parameters include a blank weight and the audio to be played this time is blank audio, the process ends and the playing interface 12011 does not play the audio in the target audio unit; the blank weight refers to the probability of playing blank audio this time. If the audio to be played this time is not blank audio, the virtual object pronunciation control interface 1201 determines whether the target audio is a looping audio or a 3D audio, where a 3D audio is an audio played according to the angle between the controlled virtual object and the target virtual object. If the target audio is neither a looping audio nor a 3D audio, the virtual object pronunciation control interface 1201 determines whether the distance between the controlled virtual object and the target virtual object exceeds the playing distance of the target audio, and generates the playing interface 12011 in response to the distance not exceeding the playing distance. If the generated playing interface 12011 is a blank interface, the process ends; if it is not a blank interface, the playing flow described above with reference to fig. 14 is executed, which is not described herein again.
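The blank-weight gate can be sketched as follows, assuming the blank weight is a probability in [0, 1]; the function name and structure are illustrative only.

```python
import random

def pick_playback(blank_weight, audios, weights):
    """With probability blank_weight, play nothing this round."""
    if random.random() < blank_weight:
        return None   # blank audio: the process ends without playing
    return random.choices(audios, weights=weights, k=1)[0]

choice = pick_playback(0.3, ["hit_1.ogg", "hit_2.ogg"], [0.4, 0.6])
print("blank" if choice is None else choice)
```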
The above steps 401 to 406 are further described below on the basis of the above possible embodiments and fig. 19.
Referring to fig. 19, the terminal stores a plurality of audios in a target directory; optionally, the target directory is a directory for storing audios in the first application. The terminal starts the second application, creates an audio unit (AudioActor) through the second application, and sets playing parameters for the audio unit; a technician can audition the configured audio unit through the second application. The terminal then generates a configuration file through the second application, loads the configuration file through the first application, and binds the audio unit to a virtual object in the virtual scene through the first application.
All the above optional technical solutions may be combined arbitrarily to form optional embodiments of the present application, and are not described herein again.
According to the technical solution provided by the embodiment of the present application, the terminal can load, through the first application, the audio configuration information generated by the second application, and bind the target audio unit to the target virtual object based on the audio configuration information. Because the audio configuration information includes the playing parameters of the audio units and the storage locations of the audios, complex and tedious resource loading and parameter setting work is not needed: the first application can load the audio resources and obtain the playing parameters simply by loading the audio configuration information, which improves the efficiency of configuring audio for virtual objects.
Fig. 20 is a schematic structural diagram of an audio binding apparatus provided in an embodiment of the present application, and referring to fig. 20, the apparatus includes: a loading module 2001, a first obtaining module 2002, and a binding module 2003.
A loading module 2001, configured to load, by the first application, audio configuration information generated by the second application, where the audio configuration information includes playback parameters of a plurality of audio units and a storage location of at least one audio in each audio unit.
A first obtaining module 2002, configured to obtain a target audio unit selected from the audio configuration information and a target virtual object selected from the virtual scene.
And a binding module 2003, configured to bind the playing parameter of the target audio unit, the storage location of at least one audio in the target audio unit, and the target virtual object.
In a possible implementation manner, the loading module is configured to display, by the first application, a configuration information loading interface of the virtual scene, where the configuration information loading interface includes at least one information identifier, and each information identifier refers to one piece of audio configuration information generated by the second application. And in response to the detection of the selection operation of any information identifier, loading the audio configuration information referred by any information identifier.
In one possible embodiment, the apparatus further comprises:
and the switching module is used for switching the configuration information loading interface into an audio binding interface through the first application, and the audio binding interface comprises an audio unit selection control and a virtual object selection control.
The loading module is further configured to display an audio unit selection interface in response to detecting a trigger operation on the audio unit selection control, where audio identifiers of a plurality of audio units are displayed in the audio unit selection interface, and each audio identifier is used to refer to one of the plurality of audio units. And in response to the detection of the selection operation on any audio identifier, acquiring the target audio unit referred by any audio identifier. In response to detecting the triggering operation of the virtual object selection control, displaying a virtual object selection interface, wherein object identifications of a plurality of virtual objects in a virtual scene are displayed in the virtual object selection interface, and each object identification is used for referring to one virtual object in the virtual scene. And in response to the detection of the selection operation on any object identifier, acquiring a target virtual object corresponding to any object identifier.
The binding module is further configured to bind the playing parameters of the target audio unit and the storage location of at least one audio in the target audio unit with the target virtual object in response to detecting the triggering operation on the audio binding control.
In one possible implementation, the modules for generating the audio configuration information include:
and the second acquisition module is used for acquiring the playing parameters of the plurality of audio units input in the audio editing interface and the storage position of at least one audio in each audio unit through the audio editing interface of the second application.
The first generation module is used for responding to the configuration information generation operation on the audio editing interface and generating audio configuration information based on the playing parameters of the audio units and the storage position of at least one audio in each audio unit.
In one possible embodiment, the apparatus further comprises:
and the dragging module is used for responding to the dragging operation of any audio unit in the first audio unit group and transferring any audio unit from the first audio unit group to a second audio unit group corresponding to the end position of the dragging operation.
In one possible embodiment, the apparatus further comprises:
the second generating module is configured to generate a scene configuration file of the virtual scene based on the plurality of virtual objects in the virtual scene, the playing parameters of the audio unit respectively bound to the plurality of virtual objects, and a storage location of at least one audio in the audio unit, where the scene configuration file is used to construct the virtual scene.
In one possible embodiment, the apparatus further comprises:
and the first playing module is used for responding to the detected playing test instruction of the target audio unit and loading at least one audio from the storage position of the at least one audio. Playing at least one audio based on the playing parameters of the target audio unit.
In one possible embodiment, the apparatus further comprises:
the second playing module is used for acquiring a first time length, a second time length, a third time length and a target volume from the playing parameters, wherein the target volume is the maximum volume for playing at least one audio, the first time length is the time length from the lowest volume to the target volume when the at least one audio is played, the second time length is the time length from the target volume to the lowest volume when the at least one audio is played, and the third time length is the time length for playing with the target volume. And controlling at least one audio to gradually increase from the lowest volume to the target volume within the first time period. And in response to the fact that the time length that the at least one audio is played at the target volume reaches the third time length, controlling the at least one audio to gradually reduce from the target volume to the lowest volume within the second time length.
In one possible embodiment, the apparatus further comprises:
and the third playing module is used for acquiring the playing weight of at least one audio from the playing parameters, and the playing weight is used for representing the playing probability of the audio. Based on the playback weight, a target audio is determined from the at least one audio. And playing the target audio.
In one possible embodiment, the apparatus further comprises:
and the fourth playing module is used for acquiring the playing distance of the target audio unit from the playing parameters, wherein the playing distance is the maximum influence distance of at least one audio in the virtual scene. And responding to the fact that the distance between the target virtual object and the controlled virtual object is smaller than or equal to the playing distance, and playing at least one audio, wherein the controlled virtual object is a virtual object controlled by the local terminal.
In one possible embodiment, the apparatus further comprises:
and the fifth playing module is used for acquiring the maximum playing times of the target audio unit from the playing parameters, wherein the maximum playing times are the maximum number of the audio played at the same time. An audio pool of audio units is created, the audio pool including a number of virtual players equal to the maximum number of plays. At least one audio is played based on the plurality of virtual players.
In a possible implementation manner, the fifth playing module is further configured to, in a case where the plurality of virtual players play simultaneously, in response to receiving a playing instruction for any one of the at least one audio, control a virtual player playing earliest in the plurality of virtual players to stop playing a current audio. And controlling the virtual player which plays at the earliest to play any audio.
In one possible embodiment, the plurality of audio units belongs to at least two audio unit groups, the apparatus further comprising:
and the sixth playing module is used for acquiring the unit group volume of the audio unit group to which the target audio unit belongs and the branch volume of the target audio unit in the audio unit group from the playing parameters. And determining the playing volume of at least one audio based on the unit group volume and the branch volume. At least one audio is played at a playback volume.
In one possible embodiment, the apparatus further comprises:
and the seventh playing module is used for acquiring the preposed audio of the audio unit from the audio configuration information, wherein the preposed audio is the audio played before at least one audio is played. And playing the prepositive audio. And responding to the end of the preposed audio playing, and playing at least one audio.
According to the technical solution provided by the embodiment of the present application, the terminal can load, through the first application, the audio configuration information generated by the second application, and bind the target audio unit to the target virtual object based on the audio configuration information. Because the audio configuration information includes the playing parameters of the audio units and the storage locations of the audios, complex and tedious resource loading and parameter setting work is not needed: the first application can load the audio resources and obtain the playing parameters simply by loading the audio configuration information, which improves the efficiency of configuring audio for virtual objects.
An embodiment of the present application provides a computer device, configured to perform the foregoing method, where the computer device may be implemented as a terminal, and a structure of the terminal is described below:
fig. 21 is a schematic structural diagram of a terminal according to an embodiment of the present application. The terminal 2100 may be: a smartphone, a tablet, a laptop, or a desktop computer. Terminal 2100 may also be referred to as a user equipment, portable terminal, laptop terminal, desktop terminal, or other name.
In general, the terminal 2100 includes: one or more processors 2101 and one or more memories 2102.
The processor 2101 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 2101 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 2101 may also include a main processor and a coprocessor: the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 2101 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 2101 may also include an AI (Artificial Intelligence) processor to process computational operations related to machine learning.
The memory 2102 may include one or more computer-readable storage media, which may be non-transitory. The memory 2102 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 2102 is used to store at least one computer program for execution by the processor 2101 to implement the audio binding methods provided by the method embodiments herein.
In some embodiments, the terminal 2100 may further optionally include: a peripheral interface 2103 and at least one peripheral. The processor 2101, memory 2102 and peripheral interface 2103 may be connected by buses or signal lines. Each peripheral may be connected to peripheral interface 2103 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 2104, display screen 2105, camera head assembly 2106, audio circuitry 2107, positioning assembly 2108, and power source 2109.
The peripheral interface 2103 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 2101 and the memory 2102. In some embodiments, the processor 2101, memory 2102 and peripheral interface 2103 are integrated on the same chip or circuit board; in some other embodiments, any one or both of the processor 2101, the memory 2102 and the peripheral interface 2103 may be implemented on separate chips or circuit boards, which is not limited by this embodiment.
The Radio Frequency circuit 2104 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 2104 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 2104 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuitry 2104 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth.
The display screen 2105 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 2105 is a touch display screen, the display screen 2105 also has the ability to capture touch signals on or over the surface of the display screen 2105. The touch signal may be input as a control signal to the processor 2101 for processing. At this point, the display 2105 may also be used to provide virtual buttons and/or virtual keyboards, also known as soft buttons and/or soft keyboards.
The camera assembly 2106 is used to capture images or video. Optionally, camera head assembly 2106 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal.
The audio circuitry 2107 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals into the processor 2101 for processing, or inputting the electric signals into the radio frequency circuit 2104 to realize voice communication.
The positioning component 2108 is used to locate the current geographic position of the terminal 2100 for navigation or LBS (Location Based Service).
Power supply 2109 is used to provide power to various components in terminal 2100. The power source 2109 may be alternating current, direct current, disposable batteries, or rechargeable batteries.
In some embodiments, the terminal 2100 also includes one or more sensors 2110. The one or more sensors 2110 include, but are not limited to: acceleration sensor 2111, gyro sensor 2112, pressure sensor 2113, fingerprint sensor 2114, optical sensor 2115, and proximity sensor 2116.
The acceleration sensor 2111 can detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the terminal 2100.
The gyro sensor 2112 can be used for acquiring the body direction and the rotation angle of the terminal 2100, and the gyro sensor 2112 and the acceleration sensor 2111 can cooperate to acquire the 3D action of the user on the terminal 2100.
Pressure sensors 2113 may be provided on the side frames of terminal 2100 and/or underneath display screen 2105. When the pressure sensor 2113 is disposed at the side frame of the terminal 2100, a user's grip signal on the terminal 2100 can be detected, and the processor 2101 performs left-right hand recognition or shortcut operation according to the grip signal collected by the pressure sensor 2113. When the pressure sensor 2113 is arranged at the lower layer of the display screen 2105, the processor 2101 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 2105.
The fingerprint sensor 2114 is configured to collect a fingerprint of a user, and the processor 2101 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 2114, or the fingerprint sensor 2114 identifies the identity of the user according to the collected fingerprint.
The optical sensor 2115 is used to collect the ambient light intensity. In one embodiment, the processor 2101 may control the display brightness of the display screen 2105 based on the ambient light intensity collected by the optical sensor 2115.
The proximity sensor 2116 is used to collect the distance between the user and the front face of the terminal 2100.
Those skilled in the art will appreciate that the configuration shown in fig. 21 is not intended to be limiting with respect to terminal 2100, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
In an exemplary embodiment, a computer-readable storage medium, for example a memory including a computer program, is also provided; the computer program is executable by a processor to perform the audio binding method in the above embodiments. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product or a computer program is also provided, which includes program code stored in a computer-readable storage medium, which is read by a processor of a computer device from the computer-readable storage medium, and which is executed by the processor to cause the computer device to execute the above-mentioned audio binding method.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. A method for audio binding, the method comprising:
loading audio configuration information generated by a second application through a first application, wherein the audio configuration information comprises playing parameters of a plurality of audio units and a storage position of at least one audio in each audio unit;
acquiring a target audio unit selected from the audio configuration information and a target virtual object selected from a virtual scene;
and binding the playing parameters of the target audio unit, the storage position of at least one audio in the target audio unit and the target virtual object.
2. The method of claim 1, wherein loading, by the first application, the audio configuration information generated by the second application comprises:
displaying a configuration information loading interface of the virtual scene through the first application, wherein the configuration information loading interface comprises at least one information identifier, and each information identifier refers to audio configuration information generated by the second application;
and in response to the detection of the selection operation of any information identifier, loading the audio configuration information referred by the any information identifier.
3. The method of claim 2, wherein after the loading of the audio configuration information corresponding to any of the information identifiers, the method further comprises:
switching the configuration information loading interface into an audio binding interface through the first application, wherein the audio binding interface comprises an audio unit selection control and a virtual object selection control;
the obtaining a target audio unit selected from the audio configuration information and a target virtual object selected from a virtual scene comprises:
in response to detecting a trigger operation on the audio unit selection control, displaying an audio unit selection interface, wherein audio identifiers of the multiple audio units are displayed in the audio unit selection interface, and each audio identifier is used for referring to one of the multiple audio units;
in response to detecting a selection operation on any audio identifier, acquiring the target audio unit referred to by the any audio identifier;
in response to detecting a triggering operation on the virtual object selection control, displaying a virtual object selection interface, where object identifiers of multiple virtual objects in the virtual scene are displayed in the virtual object selection interface, and each object identifier is used to refer to one virtual object in the virtual scene;
and in response to the detection of the selection operation of any object identifier, acquiring the target virtual object corresponding to the any object identifier.
4. The method of claim 3, further comprising an audio binding control on the audio binding interface, wherein binding the playing parameters of the target audio unit, the storage location of the at least one audio in the target audio unit, and the target virtual object comprises:
and in response to the detection of the triggering operation of the audio binding control, binding the playing parameters of the target audio unit and the storage position of at least one audio in the target audio unit with the target virtual object.
5. The method of claim 1, wherein the audio configuration information is generated by a method comprising:
acquiring, through an audio editing interface of the second application, play parameters of the plurality of audio units and a storage location of at least one audio in each of the audio units, which are input in the audio editing interface;
and responding to a configuration information generation operation on the audio editing interface, and generating the audio configuration information based on the playing parameters of the audio units and the storage position of at least one audio in each audio unit.
6. The method of claim 5, wherein the audio editing interface displays a group of audio units to which the plurality of audio units belong, and before generating the audio configuration information based on the playback parameters of the plurality of audio units and the storage location of at least one audio in each of the audio units in response to the configuration information generating operation on the audio editing interface, the method further comprises:
in response to a dragging operation on any audio unit in a first audio unit group, any audio unit is transferred from the first audio unit group to a second audio unit group corresponding to the end position of the dragging operation.
7. The method of claim 1, wherein after binding the playback parameters of the target audio unit and the storage location of the at least one audio in the target audio unit to the target virtual object, the method further comprises:
generating a scene configuration file of the virtual scene based on a plurality of virtual objects in the virtual scene, playing parameters of an audio unit respectively bound with the virtual objects and a storage position of at least one audio in the audio unit, wherein the scene configuration file is used for constructing the virtual scene.
8. The method of claim 1, wherein after binding the playback parameters of the target audio unit and the storage location of the at least one audio in the target audio unit to the target virtual object, the method further comprises:
in response to detecting a play test instruction for the target audio unit, loading the at least one audio from a storage location of the at least one audio;
playing the at least one audio based on the playing parameters of the target audio unit.
9. The method of claim 8, wherein the playing the at least one audio based on the playing parameters of the target audio unit comprises:
acquiring a first time length, a second time length, a third time length and a target volume from the playing parameters, wherein the target volume is the maximum volume for playing the at least one audio, the first time length is the time length from the lowest volume to the target volume when the at least one audio is played, the second time length is the time length from the target volume to the lowest volume when the at least one audio is played, and the third time length is the time length for playing with the target volume;
controlling the at least one audio to gradually increase from the lowest volume to the target volume within the first duration;
and in response to the time length that the at least one audio is played at the target volume reaching the third time length, controlling the at least one audio to gradually decrease from the target volume to the lowest volume within the second time length.
10. The method of claim 8, wherein the playing the at least one audio based on the playing parameters of the target audio unit comprises:
acquiring a playing weight of the at least one audio from the playing parameters, wherein the playing weight is used for representing the playing probability of the audio;
determining a target audio from the at least one audio based on the playback weight;
and playing the target audio.
11. The method of claim 8, wherein the playing the at least one audio based on the playing parameters of the target audio unit comprises:
acquiring a playing distance of the target audio unit from the playing parameters, wherein the playing distance is the maximum influence distance of the at least one audio in the virtual scene;
and responding to the fact that the distance between the target virtual object and a controlled virtual object is smaller than or equal to the playing distance, and playing the at least one audio, wherein the controlled virtual object is a virtual object controlled by a local terminal.
12. The method of claim 8, wherein the playing the at least one audio based on the playing parameters of the target audio unit comprises:
acquiring the maximum playing times of the target audio unit from the playing parameters, wherein the maximum playing times are the maximum number of audio played at the same time;
creating an audio pool of the audio unit, wherein the audio pool comprises a plurality of virtual players with the same number as the maximum playing times;
playing the at least one audio based on the plurality of virtual players.
13. An audio binding apparatus, the apparatus comprising:
the loading module is used for loading audio configuration information generated by a second application through a first application, wherein the audio configuration information comprises the playing parameters of a plurality of audio units and the storage position of at least one audio in each audio unit;
a first obtaining module, configured to obtain a target audio unit selected from the audio configuration information and a target virtual object selected from a virtual scene;
and the binding module is used for binding the playing parameters of the target audio unit, the storage position of at least one audio frequency in the target audio unit and the target virtual object.
14. A computer device, characterized in that the computer device comprises one or more processors and one or more memories in which at least one computer program is stored, the computer program being loaded and executed by the one or more processors to implement the audio binding method of any one of claims 1 to 12.
15. A computer-readable storage medium, in which at least one computer program is stored, which is loaded and executed by a processor to implement the audio binding method of any one of claims 1 to 12.
CN202110121424.3A 2021-01-28 2021-01-28 Audio binding method, device, equipment and storage medium Active CN112717395B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110121424.3A CN112717395B (en) 2021-01-28 2021-01-28 Audio binding method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112717395A 2021-04-30
CN112717395B CN112717395B (en) 2023-03-03

Family

ID=75594478

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110121424.3A Active CN112717395B (en) 2021-01-28 2021-01-28 Audio binding method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112717395B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105955700A (en) * 2016-06-16 2016-09-21 广东欧珀移动通信有限公司 Sound effect adjusting method and user terminal
CN109144610A (en) * 2018-08-31 2019-01-04 腾讯科技(深圳)有限公司 Audio frequency playing method, device, electronic device and computer readable storage medium
US20190018643A1 (en) * 2016-06-16 2019-01-17 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Sound effect configuration method and related device
CN112221137A (en) * 2020-10-26 2021-01-15 腾讯科技(深圳)有限公司 Audio processing method and device, electronic equipment and storage medium

Non-Patent Citations (2)

Title
KUANGBEN2000: "Game Audio Middleware: Integrating Unity and Wwise", CSDN Blog *
Zhihu users et al.: "What are the benefits of using the FMOD or Wwise audio plugin instead of the built-in sound system when developing mobile games with Unity 5?", Zhihu *

Also Published As

Publication number Publication date
CN112717395B (en) 2023-03-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40042504; Country of ref document: HK)
GR01 Patent grant