WO2019153840A1 - Sound reproduction method and apparatus, storage medium and electronic apparatus - Google Patents

Sound reproduction method and apparatus, storage medium and electronic apparatus

Info

Publication number
WO2019153840A1
WO2019153840A1 · PCT/CN2018/117149 · CN2018117149W
Authority
WO
WIPO (PCT)
Prior art keywords
sound
sound source
target
virtual object
virtual
Prior art date
Application number
PCT/CN2018/117149
Other languages
English (en)
French (fr)
Inventor
汪俊明
仇蒙
潘佳绮
张雅
肖庆华
张书婷
Original Assignee
Tencent Technology (Shenzhen) Company Limited (腾讯科技(深圳)有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited
Priority to EP18905673.2A priority Critical patent/EP3750608A4/en
Publication of WO2019153840A1 publication Critical patent/WO2019153840A1/zh
Priority to US16/892,054 priority patent/US11259136B2/en

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 21/00 Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/003 Changing voice quality, e.g. pitch or formants
    • G10L 21/007 Changing voice quality, e.g. pitch or formants characterised by the process used
    • G10L 21/013 Adapting to target pitch
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/54 Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 21/00 Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L 21/0272 Voice signal separating
    • G10L 21/028 Voice signal separating using properties of sound source
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S 7/30 Control circuits for electronic adaptation of the sound field
    • H04S 7/302 Electronic adaptation of stereophonic sound system to listener position or orientation
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60 Methods for processing data by generating or executing the game program
    • A63F 2300/6063 Methods for processing data by generating or executing the game program for sound processing
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 21/00 Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/003 Changing voice quality, e.g. pitch or formants
    • G10L 21/007 Changing voice quality, e.g. pitch or formants characterised by the process used
    • G10L 21/013 Adapting to target pitch
    • G10L 2021/0135 Voice conversion or morphing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S 2400/11 Positioning of individual sound objects, e.g. moving airplane, within a sound field

Definitions

  • The sound reproduction method provided by the related art has the problem of low sound reproduction accuracy.
  • The embodiments of the present application provide a sound reproduction method and apparatus, a storage medium, and an electronic apparatus, so as to solve at least the technical problem of low sound reproduction accuracy in the sound reproduction method provided by the related art.
  • a sound reproduction method is provided, including: detecting, by a terminal, a sound triggering event within a sound source detection range corresponding to a first virtual object in a virtual scene, where the sound triggering event carries sound source feature information matching the sound source that triggered the sound; when the sound triggering event is detected, determining, by the terminal, the sound source position where the sound source is located according to the sound source feature information, and acquiring a first transmission distance between the sound source position and the first position where the first virtual object is located; determining, by the terminal according to the first transmission distance, a target sound to be reproduced by the sound source at the first position; and reproducing, by the terminal, the target sound at the first position in the virtual scene.
  • a sound reproducing apparatus applied to a terminal is provided, including: a detecting unit configured to detect a sound triggering event within the sound source detection range corresponding to the first virtual object in the virtual scene, where the sound triggering event carries sound source feature information matching the sound source that triggered the sound;
  • an acquiring unit configured to determine, when the sound triggering event is detected, the sound source position where the sound source is located according to the sound source feature information, and to acquire the first transmission distance between the sound source position and the first position where the first virtual object is located;
  • a determining unit configured to determine, according to the first transmission distance, the target sound to be reproduced by the sound source at the first position; and a reproduction unit configured to reproduce the target sound at the first position in the virtual scene.
  • a storage medium is provided, in which a computer program is stored, where the computer program is configured to perform the above method when run.
  • an electronic device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the above method by means of the computer program.
  • Through the embodiments of the present application, the terminal detects the sound triggering event within the sound source detection range of the first virtual object, determines the sound source position by using the sound source feature information carried in the sound triggering event, acquires the first transmission distance between the sound source position and the first position where the first virtual object is located, and determines the target sound to be reproduced according to the first transmission distance. With this method of reproducing the target sound, when a sound triggering event is detected within the sound detection range, the sound source is accurately determined according to the sound source feature information in the detected sound triggering event, and the target sound to be reproduced at the first position in the virtual scene is accurately obtained according to the positional relationship between the sound source and the first virtual object. Sound reproduction is no longer limited to the single means of recorded playback, thereby achieving the technical effect of improving the accuracy of reproducing sound in a virtual scene and solving the technical problem of low sound reproduction accuracy in the sound reproduction method provided by the related art.
  • FIG. 1 is a schematic diagram of an application environment of a sound reproduction method according to an embodiment of the present application
  • FIG. 2 is a schematic flow chart of an alternative sound reproduction method according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of still another alternative sound reproduction method according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of still another alternative sound reproduction method according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of still another alternative sound reproduction method according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of still another alternative sound reproduction method according to an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of an optional sound reproducing apparatus according to an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of an optional electronic device according to an embodiment of the present application.
  • a sound reproduction method is provided, which may, but is not limited to, be applied to an application environment such as that shown in FIG. 1.
  • a sound triggering event is detected within the sound source detection range of the first virtual object.
  • The positions of sound source A, sound source B, and sound source C are determined according to the sound source feature information carried in the sound triggering event.
  • The target sound that needs to be reproduced at the first position of the first virtual object is determined according to distance a, distance b, and distance c, and the target sound is reproduced at the first position where the first virtual object is located.
  • the foregoing terminal may include, but is not limited to, at least one of the following: a mobile phone, a tablet computer, a notebook computer, and the like.
  • the terminal detects a sound triggering event within the sound source detection range corresponding to the first virtual object in the virtual scene, where the sound triggering event carries sound source feature information matching the sound source that triggered the sound;
  • the terminal determines a sound source location where the sound source is located according to the sound source characteristic information, and acquires a first transmission distance between the sound source location and the first location where the first virtual object is located;
  • the terminal determines, according to the first transmission distance, a target sound to be reproduced by the sound source at the first position;
  • the terminal reproduces the target sound in the first position in the virtual scene.
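The four steps above can be sketched in code. This is a minimal illustration under assumed details (2D coordinates, a scalar volume standing in for the "target sound", an invented linear attenuation curve); none of the names come from the application itself.

```python
import math

# Hypothetical sketch of the four claimed steps; all names are illustrative.
def reproduce_sound(first_position, sound_events, detection_range, curve):
    """Return (name, volume) pairs for sounds reproduced at the first position."""
    reproduced = []
    for event in sound_events:
        # Steps 1-2: the event carries sound source feature information,
        # from which the sound source position is taken.
        source_position = event["source_position"]
        distance = math.dist(source_position, first_position)  # first transmission distance
        if distance > detection_range:
            continue  # outside the sound source detection range: not detected
        # Step 3: determine the target sound (here, just a volume) from the distance.
        volume = curve(event["base_volume"], distance)
        # Step 4: "reproduce" the target sound at the first position.
        reproduced.append((event["name"], volume))
    return reproduced

# Illustrative linear attenuation curve: full volume at distance 0, silent at 10.
linear = lambda base, d: max(0.0, base * (1 - d / 10.0))
events = [
    {"name": "gunshot", "source_position": (4, 3), "base_volume": 1.0},
    {"name": "engine", "source_position": (80, 60), "base_volume": 1.0},
]
print(reproduce_sound((0, 0), events, 8.0, linear))  # only the gunshot is in range
```

The engine at (80, 60) lies outside the assumed detection range of 8 and is therefore never considered, mirroring how sound source C is ignored in the description above.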
  • the above-described sound reproduction method may be, but is not limited to, applied to a process of performing sound reproduction in a virtual scene, such as a virtual scene displayed in a game application.
  • the game application may include, but is not limited to, a Multiplayer Online Battle Arena (MOBA) or a Single-Player Game (SPG for short). This is not specifically limited.
  • The above game application may include, but is not limited to, at least one of the following: a three-dimensional (3D) game application, a Virtual Reality (VR) game application, an Augmented Reality (AR) game application, and a Mixed Reality (MR) game application.
  • the virtual scene may be, but is not limited to, an interactive scene configured in a game application.
  • For example, a virtual scene configured for a racing game includes a track and an end point, and a virtual scene configured for a shooting game includes a target object.
  • The target object may be a virtual object (also called a virtual character) controlled by another online player participating in the MOBA, a non-player character (NPC), or a machine-controlled character in human-computer interaction.
  • The virtual scene may also include, but is not limited to, other objects for advancing the plot, such as houses and vehicles, as well as weather, natural landscapes, and the like that simulate a real environment. The above is only an example, which is not limited in this embodiment.
  • Taking a game application as an example, assume that the client of the current game application controls a first virtual object (such as virtual character S1), and the virtual scene of the game application is a shooting scene containing different virtual objects.
  • The terminal detects, within the sound source detection range corresponding to virtual character S1, a sound triggering event carrying sound source feature information matching the sound source that triggered the sound.
  • When the sound triggering event is detected, the terminal determines the sound source position (such as that of sound source A) according to the sound source feature information, and obtains the first transmission distance between the sound source position and the first position where virtual character S1 is located.
  • The terminal can thus accurately determine, according to the first transmission distance, the target sound to be reproduced by the sound source at the first position, achieving the purpose of accurately reproducing the determined target sound at the first position in the virtual scene.
  • With this method of reproducing the target sound, the terminal detects the sound triggering event within the sound detection range; when the sound triggering event is detected, the terminal accurately determines the sound source according to the sound source feature information in the detected sound triggering event, and accurately acquires the target sound to be reproduced at the first position in the virtual scene according to the positional relationship between the sound source and the first virtual object. Reproduction is no longer limited to the single means of recorded playback, which improves the accuracy of reproducing sound in a virtual scene. Furthermore, the sounds of different sound sources are reproduced at the first position according to their different position information, improving the flexibility of sound reproduction and further guaranteeing accurate results.
  • The large circle shown in FIG. 3 is the sound source detection range corresponding to the first virtual object (virtual character S1). It is assumed that sound source A and sound source B are within the sound source detection range of virtual character S1, while sound source C is outside it.
  • The terminal can therefore detect sound source A and sound source B, but cannot detect sound source C.
  • The terminal acquires the sound source positions of sound source A and sound source B, and then obtains, according to those positions, the transmission distance a from sound source A to virtual character S1 and the transmission distance b from sound source B to virtual character S1.
  • The target sounds that sound source A and sound source B should reproduce at the position where virtual character S1 is located (the center position shown in FIG. 3) can then be determined and reproduced.
  • In an embodiment, determining, by the terminal according to the first transmission distance, the target sound to be reproduced by the sound source at the first position includes: determining, by the terminal, the virtual environment in which the first virtual object is currently located in the virtual scene; acquiring a sound curve of the sound source matching that virtual environment, where the sound curve is used to indicate the correspondence between the sound triggered by the sound source and the transmission distance; and determining, from the sound curve, the target sound matching the first transmission distance.
  • the sound curve may include, but is not limited to: 1) a correspondence between the volume of the sound triggered by the sound source and the transmission distance; 2) a correspondence between the pitch of the sound triggered by the sound source and the transmission distance.
  • The above is only an example; the sound curve may also jointly indicate the relationship among the sound triggered by the sound source, the transmission distance, and time, which is not limited in this embodiment.
  • In an embodiment, determining, by the terminal according to the first transmission distance, the target sound to be reproduced by the sound source at the first position may include: determining, by the terminal, the sound source type of the sound source, acquiring a sound curve of the sound source matching that sound source type, and determining, from the sound curve, the target sound matching the first transmission distance.
  • The above sound source type may be, but is not limited to, used to determine which sound curve is applied; that is, different sound source types are configured to correspond to different sound curves.
  • Two sound curves are shown in FIG. 4: the first is a high-pitched sound curve, which decays slowly and transmits over a long distance; the second is a low-pitched sound curve, which decays quickly and transmits over a short distance.
  • After the terminal acquires the sound source type, the matching sound curve is selected according to that type, and the target sound matching the first transmission distance is determined from the corresponding sound curve.
  • the high-pitched sound curve and the low-pitched sound curve shown in FIG. 4 are only examples, and do not constitute a limitation on the present application.
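As an illustration of type-dependent sound curves, the two shapes described for FIG. 4 could be modeled as exponential decays with different rates. The rate constants and function names below are invented for the sketch; the application only specifies that the high-pitched curve decays more slowly and carries farther.

```python
import math

# Hypothetical decay curves per sound source type; the rate constants are
# assumed values, chosen only so that the high-pitched curve decays more
# slowly (longer reach) than the low-pitched one, as FIG. 4 describes.
SOUND_CURVES = {
    "high_pitched": lambda base, d: base * math.exp(-0.01 * d),  # slow decay
    "low_pitched": lambda base, d: base * math.exp(-0.05 * d),   # fast decay
}

def target_sound(source_type, base_volume, first_transmission_distance):
    """Pick the curve matching the source type and evaluate it at the distance."""
    curve = SOUND_CURVES[source_type]
    return curve(base_volume, first_transmission_distance)

# At the same distance, the high-pitched sound is reproduced louder.
print(target_sound("high_pitched", 1.0, 50.0) > target_sound("low_pitched", 1.0, 50.0))
```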
  • In an embodiment, determining, by the terminal according to the sound source feature information, the sound source position where the sound source is located, and acquiring the first transmission distance between the sound source position and the first position where the first virtual object is located, includes: extracting, by the terminal from the sound source feature information, the sound source coordinates indicating the sound source position; and calculating, by the terminal, the first transmission distance according to the sound source coordinates and the position coordinates corresponding to the first position.
  • That is to say, after detecting the sound triggering event, the terminal can directly extract the carried sound source coordinates from the sound source feature information matched with each sound source. For example, after sound source A is detected, its corresponding sound source coordinates, such as (x_A, y_A), can be extracted.
  • Acquiring the first transmission distance between the sound source position and the first position where the first virtual object is located may be, but is not limited to, acquiring, by the terminal, the distance between the sound source coordinates and the coordinates of the first position.
  • Assuming the coordinates of the first position are (x_1, y_1), the distance between the two coordinate points can be obtained. This yields not only the displacement distance between the sound source and the first position of the first virtual object, but also the direction of the sound source relative to the first virtual object.
  • For example, the sound source detection range of the first virtual object (such as virtual character S1) is as shown in FIG. 5, and the sound sources indicated by the sound triggering events detected by virtual character S1 within the sound source detection range include sound source A and sound source B.
  • The terminal can extract the sound source coordinates (x_A, y_A) corresponding to sound source A, obtain the coordinates (x_1, y_1) of the first position where virtual character S1 is located, and calculate the transmission distance between the two according to these coordinates, i.e. the first transmission distance d = sqrt((x_A - x_1)^2 + (y_A - y_1)^2). The contents shown in FIG. 5 are merely illustrative and are not intended to limit the application.
  • Reproducing, by the terminal, the target sound at the first position in the virtual scene includes: when one sound source is detected, determining the target sound to be reproduced by the sound source at the first position and reproducing it at the first position; when at least two sound sources are detected, determining, by the terminal, the object target sounds to be reproduced by the at least two sound sources respectively at the first position, synthesizing the object target sounds to obtain the target sound, and reproducing the target sound at the first position.
  • the target sound may be obtained according to at least one of the following strategies:
  • the terminal synthesizes the object target sounds to be reproduced by the respective sound sources at the first position according to a pre-configured ratio, to obtain a target sound;
  • the terminal acquires the target sound according to the pre-configured priority level from the object target sounds to be reproduced by the respective sound sources at the first position;
  • the terminal randomly selects the target sound from the object target sounds to be reproduced by the respective sound sources at the first position.
  • For example, if the acquisition strategy is set to remove explosion sounds, the explosion sound can be removed and the object target sounds to be reproduced by the other sound sources at the first position are synthesized to obtain the target sound.
  • As another example, if the waterfall sound is set to the highest priority, then when a waterfall sound is detected the terminal reproduces only the waterfall sound at the first position, and the object target sounds to be reproduced by the other sound sources at the first position are ignored and not reproduced.
  • Detecting the sound triggering event by the terminal may include, but is not limited to, at least one of the following: detecting whether the first virtual object performs a sound triggering action, where the sound triggering action is used to generate a sound triggering event; detecting whether a second virtual object interacting with the first virtual object triggers a sound triggering event, where the second virtual object is controlled by the first virtual object; detecting whether a third virtual object triggers a sound triggering event, where a fourth virtual object is used to control the third virtual object, and the fourth virtual object and the first virtual object are associated objects in the virtual scene; and detecting whether the virtual environment in which the first virtual object is currently located contains an ambient sound triggering object, where the ambient sound triggering object is used to trigger a sound triggering event according to a predetermined period.
  • The first virtual object and the fourth virtual object may be, but are not limited to, objects corresponding to virtual characters controlled by application clients in the virtual scene, where the association between the first virtual object and the fourth virtual object may include, but is not limited to: teammates, enemies, or other associations within the same virtual scene.
  • The second virtual object may be, but is not limited to, an object controlled by the first virtual object, such as a piece of equipment in the virtual scene (for example, a door, a car, or a firearm); the third virtual object may be, but is not limited to, an object controlled by the fourth virtual object.
  • the first virtual object to the fourth virtual object only represent different virtual objects, and there is no limitation on the label or order of the virtual objects.
  • the method further includes: configuring, by the terminal, a sound effect for a virtual object included in the virtual environment, where the sound effect is associated with an attribute of the virtual object, and the virtual object generates a sound triggering event after a triggering operation is performed.
  • the foregoing attributes may include, but are not limited to, a material of a virtual object.
  • Virtual objects of different materials may be, but are not limited to being, configured with different sound effects, such as configuring different sound effects for stone and metal in a virtual scene, to achieve the effect of simulating the sounds of real natural objects.
  • Through the embodiments provided in this application, the terminal detects the sound triggering event within the sound source detection range of the first virtual object, determines the sound source position through the sound source feature information in the sound triggering event, obtains the first transmission distance between the sound source position and the first position where the first virtual object is located, and determines the target sound to be reproduced according to the first transmission distance. With this method of reproducing the target sound, when a sound triggering event is detected within the sound detection range, the sound source is accurately determined according to the sound source feature information in the detected sound triggering event, and the target sound to be reproduced at the first position in the virtual scene is accurately acquired according to the positional relationship between the sound source and the first virtual object. Reproduction is no longer limited to the single means of recorded playback, which improves the accuracy of reproducing sound in a virtual scene and solves the problem of low sound reproduction accuracy in the related art.
  • the terminal determines, according to the first transmission distance, the target sound to be reproduced by the sound source at the first position, which includes:
  • the terminal acquires a sound curve of the sound source that matches the virtual environment, where the sound curve is used to indicate a correspondence between the sound triggered by the sound source and the transmission distance;
  • the terminal determines a target sound that matches the first transmission distance from the sound curve.
  • the correspondence between the virtual environment and the sound curve of the sound source may be preset, and the sound curve matching the virtual environment may be acquired according to different virtual environments.
  • the virtual environment can be a square, water, desert, or grass.
  • Four sound curves are shown in Figure 6, each of which corresponds to one or more environments.
  • the sound curve in the upper left corner is the sound curve of the square
  • the sound curve in the upper right corner is the sound curve in the water
  • the sound curve in the lower left corner is the sound curve of the desert
  • the sound curve in the lower right corner is the sound curve of the grass.
  • the terminal acquires different sound curves according to different environments, and acquires target sounds according to different sound curves.
  • Through the embodiments provided in this application, the sound source produces different target sounds in different environments, achieving the effect of adjusting the target sound as the environment changes and improving the accuracy of sound reproduction.
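A hedged sketch of the environment-matched curves of FIG. 6: each environment name maps to its own preset decay rate. The rates below are invented; the application only states that each environment (square, water, desert, grass) corresponds to a different sound curve.

```python
import math

# Hypothetical environment-to-curve mapping mirroring FIG. 6; the decay
# rates are assumed values, not taken from the application.
ENVIRONMENT_DECAY = {"square": 0.01, "water": 0.08, "desert": 0.03, "grass": 0.05}

def environment_target_sound(environment, base_volume, distance):
    """Evaluate the sound curve preset for the given virtual environment."""
    return base_volume * math.exp(-ENVIRONMENT_DECAY[environment] * distance)

# The same source at the same distance is reproduced differently per environment.
for env in ("square", "water", "desert", "grass"):
    print(env, round(environment_target_sound(env, 1.0, 30.0), 3))
```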
  • In an embodiment, before the terminal detects the sound triggering event, the method further includes:
  • configuring, by the terminal, a sound curve for the sound source, where the sound curve includes a first curve and a second curve, the first curve indicating the curve segment in which the sound triggered by the sound source is not attenuated, and the second curve indicating the curve segment in which the sound triggered by the sound source is attenuated.
  • The sound source can be a virtual object in the game, and the virtual object emits sounds such as gunshots, squeaks, and the like. Within a certain distance the sound decays slowly, forming the first curve in the figure; once the distance exceeds that threshold, the attenuation speed increases, forming the second curve. For different sound sources, the attenuation speeds and the boundary between the first curve and the second curve also differ.
  • For example, if the sound source is a car, the sound emitted by the car carries far, so the first curve is relatively long; only after a long distance does the sound begin to attenuate faster, forming the second curve. If the sound source is a bicycle, the sound of the bicycle carries only a short distance, so the first curve is relatively short; after a short distance the sound begins to attenuate faster, forming the second curve.
  • Through the embodiments provided in this application, the attenuation speed of the sound is increased beyond a certain distance, thereby improving the accuracy of sound reproduction.
  • In an embodiment, determining, by the terminal from the sound curve, the target sound matching the first transmission distance includes:
  • obtaining, by the terminal from the sound curve, the attenuation distance of the sound source, where a sound triggered by the sound source is no longer reproduced beyond the attenuation distance;
  • determining, by the terminal within the attenuation distance, the target sound matching the first transmission distance.
  • the first virtual object may be a virtual character controlled by the user, and the sound source may be a vehicle in the game.
  • The terminal acquires the target sound according to the sound curve corresponding to the sound emitted by the vehicle. If the vehicle is too far from the virtual character, as shown in FIG. 7, the value of the target sound is zero; in that case, even if the terminal detects the sound emitted by the vehicle, the virtual character does not hear it because the distance is too great. If the target sound corresponding to the transmission distance from the vehicle to the virtual character is not zero, the virtual character can hear the sound emitted by the vehicle.
  • the target sound is not reproduced in the case where the first transmission distance is excessively large, and the effect of improving the sound reproduction accuracy is achieved.
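The two-segment curve and the attenuation-distance cutoff described above can be combined in one sketch. The 20%/80% split between the segments and the linear shapes are assumptions; the application only requires a slowly decaying first segment, a faster second segment, and no reproduction beyond the attenuation distance.

```python
def piecewise_target_sound(base, distance, boundary, attenuation_distance):
    """Two-segment curve: slow decay up to `boundary` (first curve), faster
    decay from `boundary` to `attenuation_distance` (second curve), and no
    reproduction at all beyond the attenuation distance."""
    if distance >= attenuation_distance:
        return 0.0  # beyond the attenuation distance the sound is not reproduced
    if distance <= boundary:
        return base * (1 - 0.2 * distance / boundary)  # first curve: slow decay
    # second curve: the remaining level decays linearly to zero at the cutoff
    return base * 0.8 * (attenuation_distance - distance) / (attenuation_distance - boundary)

# A car's sound carries far (long first curve); a bicycle's does not.
print(piecewise_target_sound(1.0, 100.0, boundary=200.0, attenuation_distance=400.0))  # still loud
print(piecewise_target_sound(1.0, 100.0, boundary=20.0, attenuation_distance=60.0))    # → 0.0
```

The two branches meet at `boundary` (both evaluate to 0.8 of the base level there), so the curve is continuous, matching the single connected curve shapes described for the car and bicycle examples.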
  • the terminal determines, according to the sound source characteristic information, a sound source location where the sound source is located, and obtains a first transmission distance between the sound source location and the first location where the first virtual object is located, including:
  • the terminal extracts a sound source coordinate for indicating a sound source position from the sound source feature information.
  • the terminal calculates the first transmission distance according to the sound source coordinates and the position coordinates corresponding to the first position.
  • the game application is taken as an example for description.
  • The first virtual object may be a virtual character in the game, and the sound source may be a vehicle.
  • A two-dimensional coordinate system is established in the plane where the virtual character is located, with the virtual character at the origin.
  • If the coordinates of the vehicle in this two-dimensional coordinate system are (4, 3), the terminal calculates the distance from the vehicle to the virtual character according to these coordinates, and the result is 5.
  • the coordinates of the sound source are obtained by establishing a coordinate system, and the first transmission distance from the sound source to the first virtual object is calculated, so that the target sound can be accurately adjusted according to the first transmission distance, thereby achieving the effect of improving the accuracy of sound reproduction.
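The distance computation above can be sketched as follows (the function name is an illustrative assumption):

```python
import math

def first_transmission_distance(source_xy, listener_xy):
    """Euclidean distance between the sound source coordinates and the
    position coordinates of the first virtual object in the plane."""
    return math.hypot(source_xy[0] - listener_xy[0],
                      source_xy[1] - listener_xy[1])
```

With the vehicle at (4, 3) and the avatar at the origin, the result is 5.0, matching the example above.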
  • the terminal reproduces the target sound in the first position in the virtual scene, including:
  • in the case where one sound source is detected, the terminal determines the target sound to be reproduced by the sound source at the first position, and reproduces the target sound at the first position;
  • in the case where at least two sound sources are detected, the terminal determines the object target sounds to be reproduced by the at least two sound sources respectively at the first position, synthesizes the object target sounds to obtain the target sound, and reproduces the target sound at the first position.
  • the terminal synthesizes the object target sounds to obtain the target sound in at least one of the following ways:
  • S21: the terminal synthesizes the object target sounds according to a pre-configured ratio to obtain the target sound;
  • S22: the terminal acquires the target sound from the object target sounds according to a pre-configured priority;
  • S23: the terminal randomly acquires the target sound from the object target sounds.
  • the terminal may set a composite ratio for each object target sound, and when a plurality of object target sounds are acquired, synthesize them into the target sound according to the composite ratio set for each object target sound.
  • the sound source may be a vehicle in the game, a wind, a pistol, and the like.
  • the composite ratio of the sound of the vehicle is 0.3
  • the composite ratio of the wind sound is 0.2
  • the composite ratio of the sound of the pistol is 0.5.
  • the terminal may set a priority for the sound of each sound source, with different sound sources corresponding to different priorities; sounds from a higher-priority source are heard preferentially in the target sound, and when sound from a higher-priority source is present, sound from a lower-priority source is not heard or is attenuated.
  • the sound source can be a vehicle or a pistol.
  • the priority of the pistol is higher than the priority of the vehicle.
  • when the terminal acquires the object target sounds of the pistol and the vehicle, since the priority of the pistol is higher, the sound of the pistol in the target sound is louder than the sound of the vehicle, or the sound of the vehicle is not heard.
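The three synthesis strategies (ratio mix, priority, random selection) can be sketched in one hypothetical function; the dictionary keys and the idea of representing each object target sound by a single volume value are illustrative assumptions:

```python
import random

def synthesize_target_sound(object_sounds, mode="ratio"):
    """Combine per-source object target sounds into one target sound.

    `object_sounds` maps a source name to a dict with assumed keys:
    'volume', 'ratio' (pre-configured composite ratio) and 'priority'.
    """
    if mode == "ratio":
        # weighted mix according to the pre-configured composite ratios
        return sum(s["volume"] * s["ratio"] for s in object_sounds.values())
    if mode == "priority":
        # only the highest-priority source is heard
        return max(object_sounds.values(), key=lambda s: s["priority"])["volume"]
    if mode == "random":
        # randomly pick one object target sound
        return random.choice(list(object_sounds.values()))["volume"]
    raise ValueError(f"unknown mode: {mode}")
```

With the example ratios above (vehicle 0.3, wind 0.2, pistol 0.5), the ratio mode produces a weighted sum, while the priority mode returns only the pistol's sound.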
  • S1 The terminal detects whether the first virtual object performs a sound triggering action, where the sound triggering action is used to generate a sound triggering event;
  • the terminal detects whether the second virtual object that interacts with the first virtual object triggers a sound triggering event, where the second virtual object is controlled by the first virtual object.
  • the terminal detects whether the third virtual object triggers a sound triggering event, where the fourth virtual object used to control the third virtual object and the first virtual object are associated objects in the virtual scene;
  • the terminal detects whether the environment sound triggering object is included in the virtual environment where the first virtual object is currently located, where the environment sound triggering object is used to trigger the sound triggering event according to a predetermined period.
  • the first virtual object may be a virtual character controlled by the first user
  • the second virtual object may be a weapon of the virtual character controlled by the first user
  • the fourth virtual object may be a virtual character controlled by another user, and the third virtual object may be a weapon of a virtual character controlled by another user
  • the ambient sound triggering object may be wind, rain, or the like.
  • the first user controls the avatar to move and emits a sound
  • the first user-controlled avatar triggers a sound triggering event
  • when the first user controls the avatar to use a weapon, the sound of the weapon's discharge triggers a sound triggering event.
  • other users control the avatar to move, a sound is emitted, and other user-controlled avatars trigger a sound-triggering event.
  • when other users control their avatars to use a weapon, the sound of the weapon's discharge triggers a sound triggering event. If there is wind in the environment, the wind triggers a sound triggering event.
  • the terminal detects whether the first virtual object, the second virtual object, the third virtual object, and the ambient sound triggering object in the virtual environment trigger a sound triggering event, thereby accurately detecting sound triggering events and obtaining the target sound according to the triggered event, which improves the accuracy of sound reproduction.
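The four detection channels described above can be sketched as one hypothetical routine; all field names (`performed_action`, `triggered`) and the event-tuple format are illustrative assumptions:

```python
def detect_sound_trigger_events(first_obj, controlled_objs, other_objs, ambient_objs):
    """Collect sound triggering events from the four detection channels:
    the first virtual object's own actions, objects it controls,
    associated objects controlled by other users, and periodic
    ambient triggers (wind, rain, ...)."""
    events = []
    if first_obj.get("performed_action"):           # first virtual object acts
        events.append(("self", first_obj["name"]))
    for obj in controlled_objs:                     # objects it controls (weapon)
        if obj.get("triggered"):
            events.append(("controlled", obj["name"]))
    for obj in other_objs:                          # objects of associated players
        if obj.get("triggered"):
            events.append(("associated", obj["name"]))
    for name in ambient_objs:                       # periodic ambient triggers
        events.append(("ambient", name))
    return events
```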
  • before the terminal detects the sound triggering event, the method further includes:
  • the terminal configures a sound effect for the virtual object included in the virtual scene, where the sound effect is associated with the attribute of the virtual object, and the virtual object generates a sound trigger event after performing the triggering operation.
  • the virtual object may be a virtual character, a virtual item, or the like in the game, such as a weapon, a vehicle, or the like.
  • when a virtual character moves, uses a weapon, or uses a vehicle, a corresponding sound effect is produced, and the terminal sets different sound effects according to the type of the virtual character or virtual item. For example, the sound effects set for a car and a bicycle are different.
  • by configuring a sound effect for the virtual object, the configured sound effect can be detected and synthesized into the target sound, thereby improving the flexibility of sound reproduction.
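Such a pre-configured association between object attributes and sound effects could look like the following table; the tuple keys and file names are purely illustrative assumptions:

```python
# Hypothetical pre-configured table associating virtual-object
# attributes (type, subtype) with sound effects.
SOUND_EFFECTS = {
    ("vehicle", "car"): "car_engine.wav",
    ("vehicle", "bicycle"): "bicycle_chain.wav",
    ("weapon", "pistol"): "pistol_shot.wav",
}

def configured_effect(object_type, subtype):
    """Look up the sound effect configured for a virtual object,
    or None if no effect has been configured."""
    return SOUND_EFFECTS.get((object_type, subtype))
```

As in the text, a car and a bicycle map to different effects.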
  • the method according to the above embodiments can be implemented by means of software plus a necessary general hardware platform, or of course by hardware, but in many cases the former is the better implementation.
  • the part of the technical solution of the present application that is essential, or that contributes over the related art, may be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or a CD-ROM).
  • the software product includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device) to perform the methods of the various embodiments of the present application.
  • the device includes:
  • the detecting unit 902 is configured to detect a sound triggering event within the sound source detection range corresponding to the first virtual object in the virtual scene, wherein the sound triggering event carries sound source characteristic information matching the sound source that triggered the sound;
  • an obtaining unit 904 is configured to determine, according to the sound source characteristic information, the sound source position where the sound source is located, and to obtain a first transmission distance between the sound source position and the first position where the first virtual object is located;
  • a determining unit 906 is configured to determine, according to the first transmission distance, a target sound to be reproduced by the sound source at the first position;
  • the reproduction unit 908 is arranged to reproduce the target sound at the first position in the virtual scene.
  • the above-described sound reproducing apparatus may be, but not limited to, applied to a process of performing sound reproduction in a virtual scene, such as a virtual scene displayed in a game application.
  • the game application may include, but is not limited to, a Multiplayer Online Battle Arena (MOBA) or a Single-Player Game (abbreviated as SPG). This is not specifically limited.
  • the above game application may include, but is not limited to, at least one of the following: a Three-Dimensional (3D) game application, a Virtual Reality (VR) game application, an Augmented Reality (AR) game application, and a Mixed Reality (MR) game application.
  • the virtual scene may be, but is not limited to, an interactive scene configured in a game application.
  • a virtual scene configured by a racing game includes a track and an end point
  • the virtual scene configured by the shooting game includes a target object, wherein the target object may be a virtual object (also called a virtual character) controlled by another online player participating in the MOBA, a Non-Player Character (NPC), or a machine character in human-computer interaction.
  • the virtual scene may also include, but is not limited to, other objects for propelling the plot, such as a house, a vehicle, or a weather, a natural landscape, and the like that simulate a real environment. The above is only an example, and the embodiment does not limit this.
  • the game application is taken as an example, and it is assumed that the client of the current game application controls the first virtual object (such as the virtual character S1), and that the virtual scene of the game application is a shooting scene, wherein the virtual scene includes different virtual objects.
  • a sound triggering event carrying sound source characteristic information matching the sound source that triggered the sound is detected;
  • when the sound triggering event is detected, the sound source (e.g., sound source A) is determined according to the sound source characteristic information;
  • the determined target sound is then accurately reproduced at the first position in the virtual scene.
  • with this method of reproducing the target sound, a sound triggering event within the audio detection range is detected, and when a sound triggering event is detected, the sound source is accurately determined according to the sound source characteristic information in the detected event;
  • the target sound to be reproduced at the first position in the virtual scene is then accurately acquired according to the positional relationship between the sound source and the first virtual object, so that sound reproduction is no longer limited to the single means of recording and replaying;
  • this achieves the technical effect of improving the accuracy of reproducing sound in the virtual scene; further, reproducing the sounds of different sound sources at the first position according to their different position information also improves the flexibility of sound reproduction, further ensuring accurate sound reproduction results.
  • the large circle shown in FIG. 3 is the sound source detection range corresponding to the first virtual object (virtual character S1). It is assumed that two sources of sound source A and sound source B are within the sound source detection range of the virtual character S1, and the sound source C is outside the sound source detection range of the virtual character.
  • the sound source A and the sound source B can be detected, but the sound source C cannot be detected.
  • the sound source positions of the sound source A and the sound source B are acquired, and then the transmission distance a of the sound source A from the virtual character S1 and the transmission distance b of the sound source B from the virtual character S1 are obtained according to the above-mentioned sound source position.
  • the target sound that the sound source A and the sound source B can reproduce at the position where the virtual character S1 is located (the center position shown in FIG. 3) can be further determined, and the target sound can be reproduced.
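The range check illustrated by FIG. 3 can be sketched as follows (the function name and the circular-range assumption are illustrative):

```python
import math

def sources_in_detection_range(listener_xy, sources, radius):
    """Return the names of sound sources inside the listener's circular
    sound source detection range. `sources` maps name -> (x, y)."""
    lx, ly = listener_xy
    return [name for name, (x, y) in sorted(sources.items())
            if math.hypot(x - lx, y - ly) <= radius]
```

With sources A and B inside the circle and C outside, only A and B are detected, as in the figure.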
  • determining, according to the first transmission distance, the target sound to be reproduced by the sound source in the first position comprises: determining a virtual environment in which the first virtual object is currently located in the virtual scene; acquiring a sound curve of the sound source matching the virtual environment, The sound curve is used to indicate a correspondence between the sound triggered by the sound source and the transmission distance; and the target sound matching the first transmission distance is determined from the sound curve.
  • determining, according to the first transmission distance, the target sound to be reproduced by the sound source in the first position may include: determining a sound source type of the sound source, acquiring a sound curve of the sound source matching the sound source type, determining the first transmission from the sound curve The distance matches the target sound.
  • the above-mentioned sound source type may be, but not limited to, used to determine a sound curve used, that is, different sound source types will be configured to correspond to different sound curves.
  • Two sound curves are shown in FIG. 4: the first is a high-pitched sound curve, which decays slowly and transmits over a long distance; the second is a low-pitched sound curve, which decays quickly and transmits over a short distance.
  • after the sound source type is acquired, the corresponding sound curve is matched according to the sound source type, and the target sound matching the first transmission distance is determined from that curve.
  • the high-pitched sound curve and the low-pitched sound curve shown in FIG. 4 are only examples, and do not constitute a limitation on the present application.
  • determining the location of the sound source where the sound source is located according to the sound source feature information, and acquiring the first transmission distance between the sound source location and the first location where the first virtual object is located includes: extracting the sound source feature information for indicating the sound source location The sound source coordinates; the first transmission distance is calculated according to the sound source coordinates and the position coordinates corresponding to the first position.
  • determining, according to the sound source feature information, the sound source location where the sound source is located includes: extracting sound source coordinates of the sound source from the sound source feature information. That is to say, after a sound triggering event is detected, the carried sound source coordinates can be extracted directly from the sound source feature information matched with each sound source. For example, after sound source A is detected, the corresponding sound source coordinates, such as (x_A, y_A), can be extracted.
  • acquiring the first transmission distance between the location of the sound source and the first location where the first virtual object is located may be, but is not limited to, obtaining a distance between the coordinates of the sound source and the coordinates of the first location.
  • the coordinates of the first position are (x_1, y_1);
  • the distance between the two coordinates can then be obtained, yielding not only the displacement distance between the sound source and the first position of the first virtual object but also the direction of the sound source relative to the first virtual object.
  • the sound source detection range of the first virtual object (such as the virtual character S1) is as shown in FIG. 5, and the sound sources indicated by the sound triggering events detected by the virtual character S1 within that range include sound source A and sound source B.
  • the sound source coordinates (x_A, y_A) corresponding to sound source A can be extracted, and the coordinates (x_1, y_1) of the first position where the virtual character S1 is located are obtained; the transmission distance, i.e., the first transmission distance, is then calculated from these coordinates. The contents shown in FIG. 5 are merely illustrative and are not intended to limit the application.
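Since both distance and direction are recoverable from the two coordinate pairs, the computation can be sketched as follows (the function name and the degrees-from-+x convention are illustrative assumptions):

```python
import math

def distance_and_direction(source_xy, listener_xy):
    """First transmission distance plus the direction of the sound
    source relative to the first virtual object, expressed as an
    angle in degrees counter-clockwise from the +x axis."""
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))
```

A source 5 units due "north" of the listener yields a distance of 5.0 and a direction of 90 degrees.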
  • reproducing the target sound at the first position in the virtual scene comprises: in the case where one sound source is detected, determining the target sound to be reproduced by the sound source at the first position and reproducing it at the first position; in the case where at least two sound sources are detected, determining the object target sounds to be reproduced by the at least two sound sources at the first position, synthesizing the object target sounds to obtain the target sound, and reproducing the target sound at the first position.
  • if the acquisition strategy is set to remove explosion sounds, then when an explosion sound is detected it can be removed, and the object target sounds to be reproduced by the other sound sources at the first position are synthesized to obtain the target sound.
  • if the waterfall sound is given the highest priority, then when a waterfall sound is detected, only the waterfall sound is reproduced at the first position, and the object target sounds to be reproduced by the other sound sources at the first position are ignored and not reproduced.
  • detecting the sound triggering event may include, but is not limited to, detecting at least one of the following: detecting whether the first virtual object performs a sound triggering action, wherein the sound triggering action is used to generate a sound triggering event; detecting whether the second virtual object that interacts with the first virtual object triggers a sound triggering event, wherein the second virtual object is controlled by the first virtual object; detecting whether the third virtual object triggers a sound triggering event, wherein the fourth virtual object used to control the third
  • virtual object and the first virtual object are associated objects in the virtual scene; and detecting whether an ambient sound triggering object is included in the virtual environment in which the first virtual object is currently located, wherein the ambient sound triggering object is used to trigger the sound triggering event according to a predetermined period.
  • the first virtual object and the fourth virtual object may be, but are not limited to, objects corresponding to virtual characters controlled by application clients in the virtual scene, where the association between the first virtual object and the fourth virtual object may include, but is not limited to: teammates, enemies, or other associations in the same virtual scene.
  • the foregoing second virtual object may be, but is not limited to, an object controlled by the first virtual object, such as equipment in the virtual scene (e.g., a door, a car, a firearm);
  • the third virtual object may be, but is not limited to, an object controlled by the fourth virtual object.
  • the first virtual object to the fourth virtual object only represent different virtual objects, and there is no limitation on the label or order of the virtual objects.
  • before detecting the sound triggering event, the method further includes: configuring a sound effect for the virtual object included in the virtual environment, wherein the sound effect is associated with an attribute of the virtual object, and the virtual object generates a sound triggering event after performing a triggering operation.
  • the foregoing attributes may include, but are not limited to, a material of a virtual object.
  • Virtual objects of different materials may be, but are not limited to being, configured with different sound effects, such as configuring different sound effects for stone and metal in a virtual scene, to achieve the effect of simulating the sound of real natural objects.
  • by detecting the sound triggering event within the sound source detection range of the first virtual object, the sound source characteristic information carried in the sound triggering event is used to determine the sound source position, the first transmission distance between the sound source position and the position of the first virtual object is acquired, and the target sound to be reproduced is determined according to the first transmission distance. With this method of reproducing the target sound, when a sound triggering event is detected within the audio detection range, the sound source is accurately determined according to the sound source characteristic information in the detected event, and the target sound to be reproduced at the first position in the virtual scene is accurately acquired according to the positional relationship between the sound source and the first virtual object, rather than being limited to the single means of recording and replaying. This achieves the technical effect of improving the accuracy of reproducing sound in a virtual scene and solves the problem of low sound reproduction accuracy in the related art.
  • the determining unit 906 includes:
  • a first determining module configured to determine a virtual environment in which the first virtual object is currently located in the virtual scene;
  • an acquiring module configured to acquire a sound curve of the sound source matching the virtual environment, wherein the sound curve is used to indicate a correspondence between the sound triggered by the sound source and the transmission distance;
  • a second determining module configured to determine a target sound that matches the first transmission distance from the sound curve.
  • the correspondence between the virtual environment and the sound curve of the sound source may be preset, and the sound curve matching the virtual environment may be acquired according to different virtual environments.
  • the virtual environment can be a square, water, desert, or grass.
  • Four sound curves are shown in Figure 6, each of which corresponds to one or more environments.
  • the sound curve in the upper left corner is the sound curve of the square
  • the sound curve in the upper right corner is the sound curve in the water
  • the sound curve in the lower left corner is the sound curve of the desert
  • the sound curve in the lower right corner is the sound curve of the grass.
  • in different environments, different sound curves are obtained, and the target sound is acquired according to the corresponding curve;
  • the sound source thus produces different target sounds in different environments, achieving the effect of adjusting the target sound as the environment changes and improving the accuracy of sound reproduction.
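As a stand-in for the four environment-specific curves of FIG. 6, the sketch below uses exponential decay with a per-environment coefficient; the coefficients and the decay model are illustrative assumptions only:

```python
import math

# Hypothetical attenuation coefficients standing in for the four
# environment-specific sound curves (square, water, desert, grass).
ENV_ATTENUATION = {"square": 0.01, "water": 0.05, "desert": 0.02, "grass": 0.03}

def env_target_volume(environment, base_volume, distance):
    """Exponential-decay stand-in for the environment-matched sound
    curve: the same source yields different target sounds in
    different virtual environments."""
    return base_volume * math.exp(-ENV_ATTENUATION[environment] * distance)
```

At the same transmission distance, the same source is quieter in water than on the square, illustrating the environment-dependent adjustment.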
  • the above device further includes:
  • the first configuration unit is configured to configure a sound curve of the sound source before the sound triggering event is detected, wherein the sound curve comprises a first curve and a second curve: the first curve indicates the curve segment in which the sound triggered by the sound source is not attenuated, and the second curve indicates the curve segment in which the sound triggered by the sound source is attenuated.
  • the sound source can be a virtual object in the game.
  • the virtual object emits sounds such as gunshots, squeaks, etc.
  • within a certain distance the sound decays slowly, forming the first curve in the figure; once that distance is exceeded, the sound attenuation speed increases, forming the second curve in the figure;
  • for different sound sources, the attenuation speed and the boundary between the first curve and the second curve also differ.
  • if the sound source is a car, the sound transmitted by the car carries far, so the first curve is relatively long; after a long distance, the sound begins to attenuate faster, forming the second curve;
  • if the sound source is a bicycle, the sound of the bicycle transmits only a short distance, so the first curve is relatively short; after a short distance, the sound begins to attenuate faster, forming the second curve.
  • the attenuation speed of the sound is increased after a certain distance, thereby improving the accuracy of sound reproduction.
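The two-segment curve above can be sketched as a piecewise-linear function; the parameter names and the linear-per-segment model are illustrative assumptions:

```python
def two_segment_volume(base, distance, boundary, slow_rate, fast_rate):
    """Piecewise sound curve: up to `boundary` (the first curve) the
    sound decays at `slow_rate` per unit distance; beyond it (the
    second curve) it decays at the faster `fast_rate`. A car would
    use a larger `boundary` than a bicycle."""
    if distance <= boundary:
        return max(0.0, base - slow_rate * distance)
    at_boundary = max(0.0, base - slow_rate * boundary)
    return max(0.0, at_boundary - fast_rate * (distance - boundary))
```

Inside the first segment the volume falls slowly; past the boundary it falls at the faster rate until it reaches zero.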
  • the second determining module includes:
  • the first acquisition sub-module is configured to obtain an attenuation distance of the sound source from the sound curve, wherein after the attenuation distance is reached, the sound triggered by the sound source cannot be reproduced;
  • the first virtual object may be a virtual character controlled by the user, and the sound source may be a vehicle in the game.
  • the target sound is acquired according to the sound curve corresponding to the sound emitted by the vehicle. If the vehicle is too far from the avatar, as shown in FIG. 7, the value of the target sound is zero: even if the sound emitted by the vehicle is detected, the avatar does not hear it because the distance is too great. If the target sound corresponding to the transmission distance between the vehicle and the avatar is not zero, the avatar can hear the sound emitted by the vehicle.
  • the target sound is not reproduced when the first transmission distance is excessively large, which improves sound reproduction accuracy.
  • the obtaining unit 904 includes:
  • an extraction module configured to extract a sound source coordinate for indicating a sound source position from the sound source feature information
  • the calculation module is configured to calculate the first transmission distance according to the sound source coordinates and the position coordinates corresponding to the first position.
  • the coordinates of the sound source are obtained by establishing a coordinate system, and the first transmission distance from the sound source to the first virtual object is calculated, so that the target sound can be accurately adjusted according to the first transmission distance, thereby achieving the effect of improving the accuracy of sound reproduction.
  • the reproduction unit 908 includes:
  • a first reproduction module configured to determine, in the case where a sound source is detected, a target sound to be reproduced by the sound source at the first position; and to reproduce the target sound at the first position;
  • a second reproduction module configured to determine, in the case where at least two sound sources are detected, the object target sounds to be reproduced by the at least two sound sources respectively at the first position; to synthesize the object target sounds to obtain the target sound; and to reproduce the target sound at the first position.
  • the foregoing second reproduction module includes at least one of the following:
  • a synthesis sub-module configured to synthesize the object target sounds according to a pre-configured ratio to obtain the target sound;
  • a second acquisition sub-module configured to acquire the target sound from the object target sounds according to a pre-configured priority;
  • a third acquisition sub-module configured to randomly acquire the target sound from the object target sounds.
  • a composite ratio may be set for each object target sound, and when a plurality of object target sounds are acquired, a plurality of object target sounds are synthesized into a target sound according to a composite ratio set for each object target sound.
  • the sound source may be a vehicle in the game, a wind, a pistol, and the like.
  • the composite ratio of the sound of the vehicle is 0.3
  • the composite ratio of the wind sound is 0.2
  • the composite ratio of the sound of the pistol is 0.5.
  • the sound of each sound source may be assigned a priority, with different sound sources corresponding to different priorities; sounds from a higher-priority source are heard preferentially in the target sound, and when sound from a higher-priority source is present, sound from a lower-priority source is not heard or is attenuated.
  • the sound source can be a vehicle or a pistol.
  • the priority of the pistol is higher than the priority of the vehicle.
  • when the object target sounds of the pistol and the vehicle are acquired, since the priority of the pistol is higher, the sound of the pistol in the target sound is louder than that of the vehicle, or the sound of the vehicle is not heard.
  • the target sound is obtained by different methods, improving the flexibility of acquiring the target sound and thereby the flexibility of sound reproduction.
  • the detecting unit 902 includes at least one of the following:
  • a first detecting module configured to detect whether the first virtual object performs a sound triggering action, wherein the sound triggering action is used to generate a sound triggering event;
  • a second detecting module configured to detect whether the second virtual object that interacts with the first virtual object triggers a sound triggering event, wherein the second virtual object is controlled by the first virtual object;
  • a third detecting module configured to detect whether the third virtual object triggers a sound triggering event, wherein the fourth virtual object used to control the third virtual object and the first virtual object are associated objects in the virtual scene;
  • the fourth detecting module is configured to detect whether the ambient sound triggering object is included in the virtual environment in which the first virtual object is currently located, wherein the ambient sound triggering object is configured to trigger the sound triggering event according to a predetermined period.
  • the first virtual object may be a virtual character controlled by the first user
  • the second virtual object may be a weapon of the virtual character controlled by the first user
  • the fourth virtual object may be a virtual character controlled by another user, and the third virtual object may be a weapon of a virtual character controlled by another user
  • the ambient sound triggering object may be wind, rain, or the like.
  • the first user controls the avatar to move and emits a sound
  • the first user-controlled avatar triggers a sound triggering event
  • when the first user controls the avatar to use a weapon, the sound of the weapon's discharge triggers a sound triggering event.
  • other users control the avatar to move, a sound is emitted, and other user-controlled avatars trigger a sound-triggering event.
  • when other users control their avatars to use a weapon, the sound of the weapon's discharge triggers a sound triggering event. If there is wind in the environment, the wind triggers a sound triggering event.
  • accurate detection of the sound triggering event is thereby achieved, so that the target sound can be obtained according to the sound triggering event, improving the accuracy of sound reproduction.
  • the above device further includes:
  • a second configuration unit configured to configure a sound effect for the virtual object included in the virtual scene, wherein the sound effect is associated with an attribute of the virtual object, and the virtual object generates a sound triggering event after performing a triggering operation;
  • the virtual object may be a virtual character or a virtual item in the game, such as a weapon or a vehicle.
  • when a virtual character moves, uses a weapon, or uses a vehicle, a corresponding sound effect is produced, and different sound effects are set according to the type of the virtual character or virtual item. For example, the sound effects set for a car and a bicycle are different.
  • by configuring a sound effect for the virtual object, the configured sound effect can be detected and synthesized into the target sound, thereby improving the flexibility of sound reproduction.
  • a storage medium having stored therein a computer program, wherein the computer program is configured to execute the steps of any one of the method embodiments described above.
  • the above storage medium may be configured to store a computer program for performing the following steps:
  • the target sound is reproduced at the first position in the virtual scene.
  • the above storage medium may be configured to store a computer program for performing the following steps:
  • the target sound that matches the first transmission distance is determined.
  • the storage medium is further configured to store a computer program for performing the steps included in the method in the above embodiments, which will not be described in detail in this embodiment.
  • the storage medium may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, and the like.
  • an electronic device for implementing the above sound reproduction method is further provided. The electronic device includes a memory 1004 and a processor 1002; the memory 1004 stores a computer program, and the processor 1002 is arranged to perform the steps in any of the above method embodiments through the computer program.
  • the foregoing electronic device may be located in at least one network device of the plurality of network devices of the computer network.
  • the target sound is reproduced at the first position in the virtual scene.
  • the structure shown in FIG. 10 is merely illustrative; the electronic device may also be a terminal device such as a smartphone (for example, an Android or iOS phone), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), or a PAD.
  • FIG. 10 does not limit the structure of the above electronic device; the electronic device may include more or fewer components (such as a network interface) than shown in FIG. 10, or have a configuration different from that shown in FIG. 10.
  • the memory 1004 can be used to store software programs and modules, such as the program instructions/modules corresponding to the sound reproduction method and apparatus in the embodiments of the present application. The processor 1002 runs the software programs and modules stored in the memory 1004 to perform various functional applications and data processing, thereby implementing the above sound reproduction method.
  • the memory 1004 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1004 may further include memory remotely located relative to the processor 1002, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the foregoing electronic device further includes a transmission device 1010 configured to receive or send data via a network. In one example, the transmission device 1010 includes a Network Interface Controller (NIC) that can be connected to other network devices and routers via a network cable to communicate with the Internet or a local area network. In another example, the transmission device 1010 is a Radio Frequency (RF) module configured to communicate with the Internet wirelessly.
  • the electronic device further includes a user interface 1006 and a display 1008, where the display 1008 is configured to display the virtual scene and the corresponding virtual objects, and the user interface 1006 is configured to acquire operation instructions corresponding to operations, where the operations may include, but are not limited to, touch-screen operations, click operations, voice input operations, and the like.
  • the integrated units in the above embodiments, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in the above computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or the part contributing to the related art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes instructions for causing one or more computer devices (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application.
  • in the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners.
  • the device embodiments described above are merely illustrative; for example, the division into units is only a logical function division, and other divisions are possible in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
  • the units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
  • each functional unit in the embodiments of the present application may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
  • in the embodiments of the present application, a sound trigger event is detected within the sound-source detection range of the first virtual object; the sound-source position is determined from the sound-source feature information carried in the sound trigger event; the first transmission distance between the sound-source position and the position of the first virtual object is acquired; and the target sound to be reproduced is determined according to the first transmission distance. When a sound trigger event is detected within the detection range, the sound source is accurately determined from the sound-source feature information in the detected event, and the target sound to be reproduced at the first position in the virtual scene is accurately acquired from the positional relationship between the sound source and the first virtual object. Sound reproduction is thus no longer limited to the single means of playing back recordings, achieving the technical effect of improving the accuracy of reproducing sound in a virtual scene.

Abstract

A sound reproduction method and apparatus. The method includes: a terminal detects a sound trigger event within a sound-source detection range corresponding to a first virtual object in a virtual scene, the sound trigger event carrying sound-source feature information used to match the sound source that triggered the sound (S202); when the sound trigger event is detected, the terminal determines, according to the sound-source feature information, the sound-source position where the sound source is located, and acquires a first transmission distance between the sound-source position and a first position where the first virtual object is located (S204); the terminal determines, according to the first transmission distance, a target sound to be reproduced by the sound source at the first position (S206); and the terminal reproduces the target sound at the first position in the virtual scene (S208). The method improves the accuracy of sound reproduction.

Description

声音再现方法和装置、存储介质及电子装置
本申请要求于2018年02月09日提交中国专利局、优先权号为2018101359607、申请名称为“声音再现方法和装置、存储介质及电子装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算机领域,具体而言,涉及一种声音再现方法和装置、存储介质及电子装置。
背景技术
如今,为了吸引更多用户下载使用应用客户端,很多终端应用的应用开发商都非常关注用户在运行应用客户端时的视听体验。其中,为了给用户提供身临其境的视听感受,很多终端应用都采用了立体声录音,然后再通过扬声器再现空间声音的方式,从而实现为用户提供空间化的听觉效果。
然而,在一些人机交互应用中,例如游戏应用,虚拟场景中所涉及的虚拟对象数量较多,若仍然使用上述方法提供的方式进行声音再现,不仅操作较复杂,而且无法保证虚拟场景中再现出的声音的真实准确性。
也就是说,相关技术提供的声音再现方法存在声音再现准确性较低的问题。
针对上述的问题,目前尚未提出有效的解决方案。
发明内容
本申请实施例提供一种声音再现方法和装置、存储介质及电子装置,以至少解决相关技术提供的声音再现方法存在声音再现准确性较低的技术问题。
根据本申请实施例的一个方面,提供了一种声音再现方法,包括:终端在虚拟场景中与第一虚拟对象对应的音源检测范围内,检测声音触发事件,其中,上述声音触发事件中携带有用于与触发声音的音源匹配的音源特征信息;在检测到上述声音触发事件的情况下,终端根据上述音源特征信息确定上述音源所在的音源位置,并获取上述音源位置与上述第一虚拟对象所在的第一位置之间的第一传输距离;终端根据上述第一传输距离确定出上述音源在上述第一位置所要再现的目标声音;终端在上述虚拟场景中的上述第一位置再现上述目标声音。
根据本申请实施例的另一方面,还提供了一种声音再现装置,应用于终端,包括:检测单元,设置为在虚拟场景中与第一虚拟对象对应的音源检测范围内,检测声音触发事件,其中,上述声音触发事件中携带有用于与触发声音的音源匹配的音源特征信息;获取单元,设置为在检测到上述声音触发事件的情况下,根据上述音源特征信息确定上述音源所在的音源位置,并获取上述音源位置与上述第一虚拟对象所在的第一位置之间的第一传输距离;确定单元,设置为根据上述第一传输距离确定出上述音源在上述第一位置所要再现的目标声音;再现单元,设置为在上述虚拟场景中的上述第一位置再现上述目标声音。
根据本申请的实施例的又一方面,还提供了一种存储介质,该存储介质中存储有计算机程序,其中,该计算机程序被设置为运行时执行上述方法。
根据本申请实施例的又一方面,还提供了一种电子装置,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,其中,上述处理器通过计算机程序执行上述的方法。
在本申请实施例中,采用检测第一虚拟对象的音源检测范围内的声音触发事件的方式,终端通过声音触发事件中的音源特征信息确定音源位置,并获取音源位置与第一虚拟对象所在的第一位置之间的第一传输距离,终端根据第一传输距离确定要再现的目标声音,通过上述再现目标声音的方 法,对音频检测范围内的声音触发事件进行检测,终端在检测到声音触发事件的情况下,根据检测到的声音触发事件中的音源特征信息准确确定出音源,并根据音源与第一虚拟对象之间的位置关系准确获取到在虚拟场景中第一位置所要再现的目标声音,而不再限于通过录音再现的单一手段来获取所要再现的声音,从而实现提高在虚拟场景中再现声音的准确性的技术效果,进而解决了相关技术提供的声音再现方法存在声音再现准确性较低的技术问题。
附图说明
此处所说明的附图用来提供对本申请的进一步理解,构成本申请的一部分,本申请的示意性实施例及其说明用于解释本申请,并不构成对本申请的不当限定。在附图中:
图1是根据本申请实施例的一种声音再现方法的应用环境的示意图;
图2是根据本申请实施例的一种可选的声音再现方法的流程示意图;
图3是根据本申请实施例的一种可选的声音再现方法的示意图;
图4是根据本申请实施例的另一种可选的声音再现方法的示意图;
图5是根据本申请实施例的又一种可选的声音再现方法的示意图;
图6是根据本申请实施例的又一种可选的声音再现方法的示意图;
图7是根据本申请实施例的又一种可选的声音再现方法的示意图;
图8是根据本申请实施例的又一种可选的声音再现方法的示意图;
图9是根据本申请实施例的一种可选的声音再现装置的结构示意图;
图10是根据本申请实施例的一种可选的电子装置的结构示意图。
具体实施方式
为了使本技术领域的人员更好地理解本申请方案,下面将结合本申请 实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分的实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都应当属于本申请保护的范围。
需要说明的是,本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本申请的实施例能够以除了在这里图示或描述的那些以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
根据本申请实施例的一个方面,提供了一种声音再现方法,可选地,上述声音再现方法可以但不限于应用于如图1所示的应用环境中。在终端102的虚拟场景中,在第一虚拟对象的音源检测范围内,检测到声音触发事件。根据声音触发事件中携带的音源特征信息,确定音源A、音源B、音源C的位置。并根据音源A的位置,获取音源A到第一虚拟对象的距离a;以及根据音源B的位置,获取音源B到第一虚拟对象的距离b;根据音源C的位置,获取音源C到第一虚拟对象的距离c。根据上述距离a、距离b、距离c确定对应的需要在第一位置再现的目标声音,并在第一虚拟对象所在的第一位置再现目标声音。
可选地,在本实施例中,上述终端可以包括但不限于以下至少之一:手机、平板电脑、笔记本电脑等。
可选地,作为一种可选的实施方式,如图2所示,上述声音再现方法可以包括:
S202,终端在虚拟场景中与第一虚拟对象对应的音源检测范围内,检测声音触发事件,其中,声音触发事件中携带有用于与触发声音的音源匹 配的音源特征信息;
S204,在检测到声音触发事件的情况下,终端根据音源特征信息确定音源所在的音源位置,并获取音源位置与第一虚拟对象所在的第一位置之间的第一传输距离;
S206,终端根据第一传输距离确定出音源在第一位置所要再现的目标声音;
S208,终端在虚拟场景中的第一位置再现目标声音。
可选地,上述声音再现方法可以但不限于应用于在虚拟场景中进行声音再现的过程中,如应用于游戏应用中所显示的虚拟场景。其中,上述游戏应用可以包括但不限于多人在线战术竞技游戏(Multiplayer Online Battle Arena,简称为MOBA)或者为单机游戏(Single-Player Game,简称为SPG)。在此不做具体限定。上述游戏应用可以包括但不限于以下至少之一:三维(Three Dimension,简称3D)游戏应用、虚拟现实(Virtual Reality,简称VR)游戏应用、增强现实(Augmented Reality,简称AR)游戏应用、混合现实(Mixed Reality,简称MR)游戏应用。上述虚拟场景可以但不限于游戏应用中所配置的交互场景,如竞速类游戏配置的虚拟场景中包括赛道、终点,如射击类游戏配置的虚拟场景中包括目标靶子,其中,目标靶子可以为MOBA中共同参与的其他在线玩家所控制的虚拟对象(也可称虚拟角色),也可以为非玩家角色(Non-Player Character,简称为NPC),也可以为人机交互中的机器角色。此外,在虚拟场景中还可以包括但不限于用于推动剧情的其他对象,如模拟真实环境所设置的房屋、交通工具,或天气、自然景观等等。以上只是一种示例,本实施例对此不作任何限定。
例如,以游戏应用为例进行说明,假设当前游戏应用的客户端控制第一虚拟对象(如虚拟角色S1),该游戏应用的虚拟场景为射击类场景,其中,在该虚拟场景中包括不同的虚拟对象。终端在与虚拟角色S1对应的音源检测范围内,检测携带有用于与触发声音的音源匹配的音源特征信息 的声音触发事件,在检测到声音触发事件的情况下,终端根据上述音源特征信息确定音源(如音源A)所在的音源位置,并获取该音源位置与上述虚拟角色S1所在的第一位置之间的第一传输距离,从而实现根据上述第一传输距离准确确定出音源在第一位置所要再现的目标声音,以达到在虚拟场景中第一位置准确再现出所确定的目标声音的目的。
需要说明的是,在本实施例中,通过上述再现目标声音的方法,终端对音频检测范围内的声音触发事件进行检测,在检测到声音触发事件的情况下,终端根据检测到的声音触发事件中的音源特征信息准确确定出音源,并根据音源与第一虚拟对象之间的位置关系准确获取到在虚拟场景中第一位置所要再现的目标声音,而不再限于通过录音再现的单一手段来获取所要再现的声音,从而实现提高在虚拟场景中再现声音的准确性的技术效果,进一步,根据不同的位置信息在第一位置再现出不同音源的声音,还提高了声音再现的灵活性,进一步保证了声音再现的准确结果。
例如,结合图3所示示例进行说明。图3所示大圆为第一虚拟对象(虚拟角色S1)对应的音源检测范围。假设音源A、音源B两个音源处于虚拟角色S1的音源检测范围之内,而音源C处于虚拟人物的音源检测范围之外。当游戏应用客户端所控制的第一虚拟对象在检测声音触发事件的过程中,则终端可以检测到音源A和音源B,但无法检测到音源C。进一步,终端获取音源A和音源B的音源位置,然后根据上述音源位置获取音源A距离虚拟角色S1的传输距离a,以及音源B距离虚拟角色S1的传输距离b。根据上述传输距离a和传输距离b,可进一步确定出音源A和音源B在虚拟角色S1所在的位置(如图3所示圆心位置)可再现的目标声音,并再现该目标声音。
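As an illustration of the detection step in the Fig. 3 example, the following minimal Python sketch filters sound sources by the detection radius and computes each transmission distance (the function and variable names are ours, not the patent's):

```python
import math

def detect_sources(listener_pos, sources, radius):
    """Return (name, transmission distance) for each source whose position
    falls within the first virtual object's sound-source detection range."""
    detected = []
    for name, pos in sources.items():
        d = math.dist(listener_pos, pos)  # Euclidean distance (Python 3.8+)
        if d <= radius:
            detected.append((name, d))
    return detected

# Character S1 at the origin with detection radius 10:
# sources A and B are detected, source C lies outside the range.
sources = {"A": (3, 4), "B": (6, 0), "C": (20, 5)}
print(detect_sources((0, 0), sources, 10))  # [('A', 5.0), ('B', 6.0)]
```

The distances returned here correspond to the first transmission distances a and b used in the next step to look up the target sound.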
可选地,终端根据第一传输距离确定出音源在第一位置所要再现的目标声音包括:终端确定在虚拟场景中第一虚拟对象当前所处的虚拟环境;获取与虚拟环境匹配的音源的声音曲线,其中,声音曲线用于指示音源所触发的声音与传输距离之间的对应关系;从声音曲线中确定出与第一传输 距离匹配的目标声音。
需要说明的是,上述声音曲线可以包括但不限于:1)音源所触发的声音的音量与传输距离之间的对应关系;2)音源所触发的声音的音调与传输距离之间的对应关系。上述仅是一种示例,在声音曲线还可以融合时间,用于表示音源所触发的声音与传输距离、时间之间的关系,本实施例中对此不做任何限定。
可选地,终端根据第一传输距离确定出音源在第一位置所要再现的目标声音可以包括:终端确定音源的音源类型,获取音源类型匹配的音源的声音曲线,从声音曲线中确定出与第一传输距离匹配的目标声音。
可选地,在本实施例中,上述音源类型可以但不限于用于确定所使用的声音曲线,也就是说,不同的音源类型将被配置对应不同的声音曲线。
具体结合图4进行说明。图4中示出了两种声音曲线,其中,第一种声音曲线是高声调的声音曲线,该声音曲线衰减速度慢,传输距离远,第二种声音曲线是低声调的声音曲线,该声音曲线衰减速度快,传输距离近。当终端获取到音源类型时,根据音源类型匹配不同的声音曲线,并根据相应的声音曲线确定与第一传输距离匹配的目标声音。需要说明的是,上述图4所示的高声调声音曲线与低声调声音曲线仅为实例,并不构成对本申请的限定。
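The type-matched curve lookup described for Fig. 4 might be sketched as follows; the two linear curve shapes and their cut-off distances are assumptions for illustration only, since Fig. 4 does not fix them:

```python
def high_pitch_curve(distance):
    # slow attenuation, long transmission distance (assumed shape)
    return max(0.0, 1.0 - distance / 200.0)

def low_pitch_curve(distance):
    # fast attenuation, short transmission distance (assumed shape)
    return max(0.0, 1.0 - distance / 50.0)

# Different sound-source types are configured with different curves.
SOUND_CURVES = {"high_pitch": high_pitch_curve, "low_pitch": low_pitch_curve}

def target_sound(source_type, first_distance):
    """Look up the type's curve and read off the sound at that distance."""
    return SOUND_CURVES[source_type](first_distance)

print(target_sound("high_pitch", 100))  # 0.5
print(target_sound("low_pitch", 100))   # 0.0
```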
可选地,终端根据音源特征信息确定音源所在的音源位置,并获取音源位置与第一虚拟对象所在的第一位置之间的第一传输距离包括:终端从音源特征信息中提取出用于指示音源位置的音源坐标;终端根据音源坐标及第一位置对应的位置坐标,计算出第一传输距离。
可选地，在本实施例中，终端根据音源特征信息确定音源所在的音源位置包括：终端从音源特征信息中提取出该音源的音源坐标。也就是说，在检测声音触发事件后，终端可以从与每个音源匹配的音源特征信息中直接提取出所携带的音源坐标，例如，检测音源A后，可提取对应的音源坐标，如(x_A, y_A)。
可选地，在本实施例中，终端获取音源位置与第一虚拟对象所在的第一位置之间的第一传输距离可以但不限于：终端获取上述音源坐标与第一位置的坐标之间的距离。例如，假设第一位置的坐标为(x_1, y_1)，则可以获取上述两个坐标之间的距离，不仅可以得到音源相对第一虚拟对象所在第一位置之间的位移距离，还可以得到音源相对第一虚拟对象的方向。以便于准确确定出音源相对第一虚拟对象的位置变化，从而达到根据上述位置变化，从上述声音曲线中准确确定出音源在第一位置所要再现的目标声音的目的。
具体结合图5进行说明：如图5所示大圆为第一虚拟对象(如虚拟角色S1)的音源检测范围，假设虚拟角色S1在音源检测范围之内检测到的声音触发事件所指示的音源包括：音源A、音源B。以音源A为例说明，终端可以提取音源A对应的音源坐标(x_A, y_A)，获取虚拟角色S1所在第一位置的坐标(x_1, y_1)，根据上述坐标计算二者之间的传输距离(即第一传输距离)：
第一传输距离 = √((x_A − x_1)² + (y_A − y_1)²)
这里图5中所示的内容仅为举例说明,并不构成对本申请的限定。
可选地,终端在虚拟场景中的第一位置再现目标声音包括:在检测出一个音源的情况下,终端确定一个音源在第一位置所要再现的目标声音;在第一位置上再现目标声音;在检测出至少两个音源的情况下,终端确定至少两个音源分别在第一位置所要再现的对象目标声音;合成对象目标声音,得到目标声音;在第一位置上再现目标声音。
可选地,在终端检测到至少两个音源的情况下,可以根据以下至少一种策略获取目标声音:
1)终端按照预先配置的比例,将各个音源在第一位置所要再现的对象目标声音合成,得到目标声音;
2)终端从各个音源在第一位置所要再现的对象目标声音中按照预先 配置的优先级获取目标声音;
3)终端从各个音源在第一位置所要再现的对象目标声音中随机获取目标声音。
例如,设定获取策略为去除爆炸声,当终端检测到爆炸声时,则可以将该爆炸声去除,将剩下其他音源在第一位置所要再现的对象目标声音合成,以得到目标声音。又例如,设定瀑布声优先级最高,则终端在检测到瀑布声时,仅在第一位置再现瀑布声,将剩下其他音源在第一位置所要再现的对象目标声音忽略,不再现。
可选地,终端检测声音触发事件可以包括但不限于以下至少之一:终端检测第一虚拟对象是否执行声音触发动作,其中,声音触发动作用于生成声音触发事件;终端检测与第一虚拟对象交互的第二虚拟对象是否触发声音触发事件,其中,第二虚拟对象受第一虚拟对象控制;终端检测第三虚拟对象是否触发声音触发事件,其中,用于控制第三虚拟对象的第四虚拟对象与第一虚拟对象为在虚拟场景中的关联对象;终端检测第一虚拟对象当前所处的虚拟环境中是否包括环境声音触发对象,其中,环境声音触发对象用于按照预定周期触发声音触发事件。
其中,上述第一虚拟对象与第四虚拟对象可以但不限于为在虚拟场景中通过应用客户端控制的虚拟角色对应的对象,其中,第一虚拟对象与第四虚拟对象之间的关联关系可以包括但不限于:战友、敌人或其他在同一虚拟场景中的关联关系。
此外,上述第二虚拟对象可以但不限于为第一虚拟对象所控制的对象,如虚拟场景中的装备(如门、车、枪支)等;上述第三虚拟对象可以但不限于为第四虚拟对象所控制的对象,如虚拟场景中的装备(如门、车、枪支)等。需要说明的是,上述第一虚拟对象至第四虚拟对象仅表示不同的虚拟对象,并不存在对虚拟对象的标号或顺序的限定。
可选地,在终端检测声音触发事件之前,还包括:终端为虚拟环境中 所包括的虚拟对象配置音效,其中,音效与虚拟对象的属性关联,虚拟对象在执行触发操作后将生成声音触发事件。
需要说明的是,在本实施例中,上述属性可以包括但不限于虚拟对象的材质。不同材质的虚拟对象可以但不限于配置不同的音效,如为虚拟场景中的石块、金属配置不同的音效,以达到模拟真实的自然对象的声音的效果。
通过本申请实施例,采用终端检测第一虚拟对象的音源检测范围内的声音触发事件的方式,通过声音触发事件中的音源特征信息确定音源位置,并获取音源位置与第一虚拟对象所在的第一位置之间的第一传输距离,终端根据第一传输距离确定要再现的目标声音,通过上述再现目标声音的方法,对音频检测范围内的声音触发事件进行检测,在检测到声音触发事件的情况下,根据检测到的声音触发事件中的音源特征信息准确确定出音源,并根据音源与第一虚拟对象之间的位置关系准确获取到在虚拟场景中第一位置所要再现的目标声音,而不再限于通过录音再现的单一手段来获取所要再现的声音,从而实现提高在虚拟场景中再现声音的准确性的技术效果,解决了相关技术中声音再现准确性低的问题。
作为一种可选的实施方案,终端根据第一传输距离确定出音源在第一位置所要再现的目标声音包括:
S1,终端确定在虚拟场景中第一虚拟对象当前所处的虚拟环境;
S2,终端获取与虚拟环境匹配的音源的声音曲线,其中,声音曲线用于指示音源所触发的声音与传输距离之间的对应关系;
S3,终端从声音曲线中确定出与第一传输距离匹配的目标声音。
可选地,可以预先设定虚拟环境与音源的声音曲线的对应关系,根据不同的虚拟环境获取与虚拟环境匹配的声音曲线。
例如,虚拟环境可以为广场、水中、沙漠、草地。结合图6进行说明。图6中示出了四种声音曲线,其中每一种声音曲线对应一种或多种环境。 其中,左上角的声音曲线是广场的声音曲线,右上角的声音曲线是水中的声音曲线,左下角的声音曲线是沙漠的声音曲线,右下角的声音曲线为草地的声音曲线。终端根据不同的环境,获取不同的声音曲线,并根据不同的声音曲线,获取目标声音。
需要说明的是,上述虚拟环境为广场、水中、沙漠、草地等仅为一种示例,虚拟环境也可以为其他环境。图6中的四个声音曲线仅为了说明,具体的声音曲线的变化趋势需要根据实际进行设定。
通过本实施例,通过为不同的环境设置不同的声音曲线,从而使音源在不同的环境下有不同的目标声音,达到了根据环境的改变而调整目标声音的效果,提高了声音再现的准确性。
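A minimal sketch of the environment-matched lookup (cf. Fig. 6), assuming exponential decay with a per-environment rate; the rate values are invented for illustration and are not from the patent:

```python
import math

# Assumed decay rate per virtual environment (smaller = carries farther).
ENV_DECAY = {"square": 0.05, "water": 0.40, "desert": 0.10, "grass": 0.15}

def target_sound(environment, first_distance, base_volume=1.0):
    """First determine the listener's environment, then evaluate that
    environment's curve at the first transmission distance."""
    return base_volume * math.exp(-ENV_DECAY[environment] * first_distance)

# The same source at the same distance is heard differently per environment:
print(target_sound("square", 10) > target_sound("water", 10))  # True
```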
作为一种可选的实施方案,终端在检测声音触发事件之前,还包括:
S1,终端配置音源的声音曲线,其中,声音曲线包括:第一曲线、第二曲线,第一曲线用于指示音源所触发的声音未产生衰减的曲线段,第二曲线用于指示音源所触发的声音产生衰减的曲线段。
继续以游戏应用进行说明,例如,音源可以为游戏中的虚拟对象,当虚拟对象发出如枪声、吼声等声音时,声音在一定距离内的衰减速度慢,于是形成图7中的第一曲线,当超过一定的距离时,声音衰减速度加快,形成图7中的第二曲线。
需要说明的是,根据音源发出声音的不同,第一曲线与第二曲线的衰减速度与分界线也不同。
例如,音源为一辆汽车,则汽车发出的声音的传输距离远,因此,第一曲线相对更长,在经过较长的距离后,声音才开始加速衰减,形成第二曲线,而如果音源为一辆自行车,自行车发出的声音的传输距离近,因此,第一曲线要相对短一些,在经过较短的距离后,声音就开始加速衰减,形成第二曲线。
需要说明的是,上述举例内容与图7记载的内容仅为了解释本申请, 并不构成对本申请的限定。
通过本实施例,通过在为音源配置的声音曲线中配置第一曲线与第二曲线,从而在经过一段距离后声音的衰减速度加快,从而提高了声音再现的准确性。
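The two-segment curve of Fig. 7 can be sketched as a piecewise function: flat on the first curve, then decaying on the second. The boundary and fade lengths below are illustrative; per the text, a car is configured with a longer first segment than a bicycle:

```python
def volume_at(distance, boundary, fade_length, base_volume=1.0):
    """First curve: no attenuation up to `boundary`; second curve: linear
    decay that reaches zero at boundary + fade_length (assumed shape)."""
    if distance <= boundary:
        return base_volume
    return max(0.0, base_volume * (1 - (distance - boundary) / fade_length))

CAR = dict(boundary=50, fade_length=100)     # far-carrying source
BICYCLE = dict(boundary=10, fade_length=20)  # short-range source

print(volume_at(30, **CAR))      # 1.0  (still on the first curve)
print(volume_at(30, **BICYCLE))  # 0.0  (second curve fully decayed)
```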
作为一种可选的实施方案,终端从声音曲线中确定出与第一传输距离匹配的目标声音包括:
S1,终端从声音曲线中获取音源的衰减距离,其中,在到达衰减距离后,音源所触发的声音将无法被再现;
S2,在第一传输距离小于衰减距离的情况下,终端确定出与第一传输距离匹配的目标声音。
继续以游戏应用为例进行说明。第一虚拟对象可以为用户控制的虚拟人物,音源可以为游戏中的交通工具。当游戏中的交通工具发出的声音处于虚拟人物的音源检测范围内时,终端根据交通工具发出的声音对应的声音曲线获取目标声音。如果交通工具距离虚拟人物的距离过远,则如图7所示,此时的目标声音的值为零。此时,即使交通工具发出的声音被终端检测到,也由于距离太远不会被虚拟人物听到。如果交通工具距离虚拟人物的传输距离所对应的目标声音不为零,则虚拟人物可以听到交通工具所发出的声音。
通过本实施例,通过根据衰减距离确定是否再现目标声音,从而在第一传输距离过大的情况下,不再现目标声音,达到了提高声音再现准确性的效果。
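A hedged sketch of the attenuation-distance check above: once the first transmission distance reaches the attenuation distance read from the curve, no target sound is reproduced. The linear curve and names are assumptions:

```python
def target_sound(first_distance, attenuation_distance, base_volume=1.0):
    """Return the sound to reproduce, or None once the attenuation
    distance is reached (the source can no longer be heard)."""
    if first_distance >= attenuation_distance:
        return None  # too far: the virtual character does not hear it
    return base_volume * (1 - first_distance / attenuation_distance)

print(target_sound(600, 500))  # None (vehicle too far from the character)
print(target_sound(250, 500))  # 0.5
```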
作为一种可选的实施方案,终端根据音源特征信息确定音源所在的音源位置,并获取音源位置与第一虚拟对象所在的第一位置之间的第一传输距离包括:
S1,终端从音源特征信息中提取出用于指示音源位置的音源坐标;
S2,终端根据音源坐标及第一位置对应的位置坐标,计算出第一传输距离。
可选地,继续以游戏应用为例进行说明,第一虚拟对象可以为游戏中的虚拟人物,音源可以为交通工具,如图8所示,以虚拟人物所在平面建立二维坐标系,交通工具A所在的位置在二维坐标系中的坐标为(4,3),则终端根据交通工具所在的坐标,计算交通工具到虚拟人物的距离,计算结果为5。
通过本实施例,通过建立坐标系的方法获取音源的坐标,并计算音源到第一虚拟对象的第一传输距离,从而可以根据第一传输距离准确调整目标声音,实现了提高声音再现的准确性的效果。
作为一种可选的实施方案,终端在虚拟场景中的第一位置再现目标声音包括:
S1,在检测出一个音源的情况下,终端确定一个音源在第一位置所要再现的目标声音;在第一位置上再现目标声音;
S2,在检测出至少两个音源的情况下,终端确定至少两个音源分别在第一位置所要再现的对象目标声音;合成对象目标声音,得到目标声音;在第一位置上再现目标声音。
可选地,终端合成对象目标声音,得到目标声音包括以下至少之一:
S21,终端按照预先配置的比例合成对象目标声音,得到目标声音;
S22,终端从对象目标声音中按照预先配置的优先级获取目标声音;
S23,终端从对象目标声音中随机获取目标声音。
可选地,终端可以为每一个对象目标声音设置合成比例,当获取到多个对象目标声音时,根据为每一个对象目标声音设置的合成比例,将多个对象目标声音合成为目标声音。
例如,继续以游戏应用为例进行说明,音源可以是游戏中的交通工具, 风,手枪等。交通工具的声音的合成比例为0.3,风声的合成比例为0.2,手枪的声音的合成比例为0.5。当终端获取到交通工具、风、手枪的对象目标声音时,将对象目标声音与对应的合成比例做乘法,再将做乘法后的对象目标声音合成为目标声音。
可选地,终端可以为音源的声音设置优先级,不同的音源的声音对应不同的优先级,优先级高的音源所发出的声音在目标声音中优先被听到,在存在优先级高的音源所发出的声音的情况下,优先级低的音源的声音不会被听到或者声音变小。
例如，继续以游戏应用为例进行说明，音源可以为交通工具或手枪。手枪的优先级要高于交通工具的优先级。当终端获取到手枪与交通工具的对象目标声音后，因为手枪的优先级要高，因此，目标声音中手枪的声音要比交通工具的声音大，或者听不到交通工具的声音。
通过本实施例,通过采用不同的方法获取目标声音,从而提高了获取目标声音的灵活性,进而实现了提高音源再现的灵活性的效果。
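The three synthesis strategies above can be sketched as follows, using scalar volumes to stand in for the per-source object target sounds; the ratio values 0.3/0.2/0.5 and the pistol-over-vehicle priority follow the text's example, while the function names and everything else are illustrative:

```python
import random

def mix_by_ratio(sounds, ratios):
    """Multiply each object target sound by its configured ratio, then sum."""
    return sum(volume * ratios[name] for name, volume in sounds.items())

def pick_by_priority(sounds, priorities):
    """Reproduce only the highest-priority source's sound."""
    chosen = max(sounds, key=lambda name: priorities[name])
    return sounds[chosen]

def pick_random(sounds):
    """Randomly select one object target sound."""
    return random.choice(list(sounds.values()))

sounds = {"vehicle": 0.8, "wind": 0.5, "pistol": 1.0}
ratios = {"vehicle": 0.3, "wind": 0.2, "pistol": 0.5}
print(round(mix_by_ratio(sounds, ratios), 2))  # 0.84
print(pick_by_priority(sounds, {"vehicle": 1, "wind": 0, "pistol": 2}))  # 1.0
```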
作为一种可选的实施方案,终端检测声音触发事件包括以下至少之一:
S1,终端检测第一虚拟对象是否执行声音触发动作,其中,声音触发动作用于生成声音触发事件;
S2,终端检测与第一虚拟对象交互的第二虚拟对象是否触发声音触发事件,其中,第二虚拟对象受第一虚拟对象控制;
S3,终端检测第三虚拟对象是否触发声音触发事件,其中,用于控制第三虚拟对象的第四虚拟对象与第一虚拟对象为在虚拟场景中的关联对象;
S4,终端检测第一虚拟对象当前所处的虚拟环境中是否包括环境声音触发对象,其中,环境声音触发对象用于按照预定周期触发声音触发事件。
例如，继续以游戏应用为例进行说明，第一虚拟对象可以为第一用户控制的虚拟人物，第二虚拟对象可以为第一用户控制的虚拟人物的武器，第四虚拟对象可以为其他用户控制的虚拟人物，第三虚拟对象可以为其他用户控制的虚拟人物的武器，环境声音触发对象可以为风、雨等。
例如，在进行一局游戏时，若第一用户控制虚拟人物进行移动时发出声音，则第一用户控制的虚拟人物触发了声音触发事件；第一用户控制虚拟人物使用了武器，则该武器触发了声音触发事件。若其他用户控制虚拟人物进行移动时发出了声音，则其他用户控制的虚拟人物触发了声音触发事件；其他用户控制虚拟人物使用了武器，则该武器触发了声音触发事件。如果环境中有风，则风会触发声音触发事件。
通过本实施例,通过终端检测第一虚拟对象、第二虚拟对象、第三虚拟对象与虚拟对象所在的虚拟环境是否触发了声音触发事件,从而实现了准确检测声音触发事件,以根据声音触发事件获取目标声音的目的,提高了声音再现的准确性。
作为一种可选的实施方案,在终端检测声音触发事件之前,还包括:
S1,终端为虚拟场景中所包括的虚拟对象配置音效,其中,音效与虚拟对象的属性关联,虚拟对象在执行触发操作后将生成声音触发事件。
例如,继续以游戏应用为例进行说明,虚拟对象可以为游戏中的虚拟人物、虚拟物品等,例如武器,交通工具等。当虚拟人物进行移动,使用武器,使用交通工具时,都对应产生音效,终端根据虚拟人物或虚拟物品的类型,设置不同的音效。例如,交通工具为汽车或自行车时,设置的音效不同。
通过本实施例,通过为虚拟对象配置音效,从而可以对配置的音效进行检测,并合成目标声音,从而实现了提高声音再现的灵活性的效果。
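Configuring sound effects by object type/attribute, as described above, might look like the table-driven sketch below; the mapping keys and effect names are purely illustrative:

```python
# Assumed configuration table: (object kind, subtype) -> effect name.
SOUND_EFFECTS = {
    ("vehicle", "car"): "engine_loop",
    ("vehicle", "bicycle"): "chain_click",
    ("weapon", "pistol"): "pistol_shot",
    ("character", "footstep"): "footstep_grass",
}

def configure_effect(kind, subtype, default="generic"):
    """Return the effect configured for this object type, if any."""
    return SOUND_EFFECTS.get((kind, subtype), default)

# A car and a bicycle are configured with different effects:
print(configure_effect("vehicle", "car"))      # engine_loop
print(configure_effect("vehicle", "bicycle"))  # chain_click
```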
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请并不受所描述的动作顺序的限制,因为依据本申请,某些步骤可以采用其他顺序 或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于可选实施例,所涉及的动作和模块并不一定是本申请所必须的。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到根据上述实施例的方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对相关技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,或者网络设备等)执行本申请各个实施例的方法。
根据本申请实施例的另一个方面,还提供了一种用于实施上述声音再现方法的声音再现装置,应用于终端中。如图9所示,该装置包括:
1）检测单元902，设置为在虚拟场景中与第一虚拟对象对应的音源检测范围内，检测声音触发事件，其中，声音触发事件中携带有用于与触发声音的音源匹配的音源特征信息；
2)获取单元904,设置为在检测到声音触发事件的情况下,根据音源特征信息确定音源所在的音源位置,并获取音源位置与第一虚拟对象所在的第一位置之间的第一传输距离;
3)确定单元906,设置为根据第一传输距离确定出音源在第一位置所要再现的目标声音;
4)再现单元908,设置为在虚拟场景中的第一位置再现目标声音。
可选地,上述声音再现装置可以但不限于应用于在虚拟场景中进行声音再现的过程中,如应用于游戏应用中所显示的虚拟场景。其中,上述游戏应用可以包括但不限于多人在线战术竞技游戏(Multiplayer Online Battle Arena,简称为MOBA)或者为单机游戏(Single-Player Game,简 称为SPG)。在此不做具体限定。上述游戏应用可以包括但不限于以下至少之一:三维(Three Dimension,简称3D)游戏应用、虚拟现实(Virtual Reality,简称VR)游戏应用、增强现实(Augmented Reality,简称AR)游戏应用、混合现实(Mixed Reality,简称MR)游戏应用。上述虚拟场景可以但不限于游戏应用中所配置的交互场景,如竞速类游戏配置的虚拟场景中包括赛道、终点,如射击类游戏配置的虚拟场景中包括目标靶子,其中,目标靶子可以为MOBA中共同参与的其他在线玩家所控制的虚拟对象(也可称虚拟角色),也可以为非玩家角色(Non-Player Character,简称为NPC),也可以为人机交互中的机器角色。此外,在虚拟场景中还可以包括但不限于用于推动剧情的其他对象,如模拟真实环境所设置的房屋、交通工具,或天气、自然景观等等。以上只是一种示例,本实施例对此不作任何限定。
例如,以游戏应用为例进行说明,假设当前游戏应用的客户端控制第一虚拟对象(如虚拟角色S1),该游戏应用的虚拟场景为射击类场景,其中,在该虚拟场景中包括不同的虚拟对象。在与虚拟角色S1对应的音源检测范围内,检测携带有用于与触发声音的音源匹配的音源特征信息的声音触发事件,在检测到声音触发事件的情况下,根据上述音源特征信息确定音源(如音源A)所在的音源位置,并获取该音源位置与上述虚拟角色S1所在的第一位置之间的第一传输距离,从而实现根据上述第一传输距离准确确定出音源在第一位置所要再现的目标声音,以达到在虚拟场景中第一位置准确再现出所确定的目标声音的目的。
需要说明的是,在本实施例中,通过上述再现目标声音的方法,对音频检测范围内的声音触发事件进行检测,在检测到声音触发事件的情况下,根据检测到的声音触发事件中的音源特征信息准确确定出音源,并根据音源与第一虚拟对象之间的位置关系准确获取到在虚拟场景中第一位置所要再现的目标声音,而不再限于通过录音再现的单一手段来获取所要再现的声音,从而实现提高在虚拟场景中再现声音的准确性的技术效果,进一步,根据不同的位置信息在第一位置再现出不同音源的声音,还提高了声 音再现的灵活性,进一步保证了声音再现的准确结果。
例如,结合图3所示示例进行说明。图3所示大圆为第一虚拟对象(虚拟角色S1)对应的音源检测范围。假设音源A、音源B两个音源处于虚拟角色S1的音源检测范围之内,而音源C处于虚拟人物的音源检测范围之外。当游戏应用客户端所控制的第一虚拟对象在检测声音触发事件的过程中,则可以检测到音源A和音源B,但无法检测到音源C。进一步,获取音源A和音源B的音源位置,然后根据上述音源位置获取音源A距离虚拟角色S1的传输距离a,以及音源B距离虚拟角色S1的传输距离b。根据上述传输距离a和传输距离b,可进一步确定出音源A和音源B在虚拟角色S1所在的位置(如图3所示圆心位置)可再现的目标声音,并再现该目标声音。
可选地,根据第一传输距离确定出音源在第一位置所要再现的目标声音包括:确定在虚拟场景中第一虚拟对象当前所处的虚拟环境;获取与虚拟环境匹配的音源的声音曲线,其中,声音曲线用于指示音源所触发的声音与传输距离之间的对应关系;从声音曲线中确定出与第一传输距离匹配的目标声音。
需要说明的是,上述声音曲线可以包括但不限于:1)音源所触发的声音的音量与传输距离之间的对应关系;2)音源所触发的声音的音调与传输距离之间的对应关系。上述仅是一种示例,在声音曲线还可以融合时间,用于表示音源所触发的声音与传输距离、时间之间的关系,本实施例中对此不做任何限定。
可选地,根据第一传输距离确定出音源在第一位置所要再现的目标声音可以包括:确定音源的音源类型,获取音源类型匹配的音源的声音曲线,从声音曲线中确定出与第一传输距离匹配的目标声音。
可选地,在本实施例中,上述音源类型可以但不限于用于确定所使用的声音曲线,也就是说,不同的音源类型将被配置对应不同的声音曲线。
具体结合图4进行说明。图4中示出了两种声音曲线,其中,第一种声音曲线是高声调的声音曲线,该声音曲线衰减速度慢,传输距离远,第二种声音曲线是低声调的声音曲线,该声音曲线衰减速度快,传输距离近。当获取到音源类型时,根据音源类型匹配不同的声音曲线,并根据相应的声音曲线确定与第一传输距离匹配的目标声音。需要说明的是,上述图4所示的高声调声音曲线与低声调声音曲线仅为实例,并不构成对本申请的限定。
可选地,根据音源特征信息确定音源所在的音源位置,并获取音源位置与第一虚拟对象所在的第一位置之间的第一传输距离包括:从音源特征信息中提取出用于指示音源位置的音源坐标;根据音源坐标及第一位置对应的位置坐标,计算出第一传输距离。
可选地，在本实施例中，根据音源特征信息确定音源所在的音源位置包括：从音源特征信息中提取出该音源的音源坐标。也就是说，在检测声音触发事件后，可以从与每个音源匹配的音源特征信息中直接提取出所携带的音源坐标，例如，检测音源A后，可提取对应的音源坐标，如(x_A, y_A)。
可选地，在本实施例中，获取音源位置与第一虚拟对象所在的第一位置之间的第一传输距离可以但不限于：获取上述音源坐标与第一位置的坐标之间的距离。例如，假设第一位置的坐标为(x_1, y_1)，则可以获取上述两个坐标之间的距离，不仅可以得到音源相对第一虚拟对象所在第一位置之间的位移距离，还可以得到音源相对第一虚拟对象的方向。以便于准确确定出音源相对第一虚拟对象的位置变化，从而达到根据上述位置变化，从上述声音曲线中准确确定出音源在第一位置所要再现的目标声音的目的。
具体结合图5进行说明：如图5所示大圆为第一虚拟对象(如虚拟角色S1)的音源检测范围，假设虚拟角色S1在音源检测范围之内检测到的声音触发事件所指示的音源包括：音源A、音源B。以音源A为例说明，可以提取音源A对应的音源坐标(x_A, y_A)，获取虚拟角色S1所在第一位置的坐标(x_1, y_1)，根据上述坐标计算二者之间的传输距离(即第一传输距离)：
第一传输距离 = √((x_A − x_1)² + (y_A − y_1)²)
这里图5中所示的内容仅为举例说明,并不构成对本申请的限定。
可选地,在虚拟场景中的第一位置再现目标声音包括:在检测出一个音源的情况下,确定一个音源在第一位置所要再现的目标声音;在第一位置上再现目标声音;在检测出至少两个音源的情况下,确定至少两个音源分别在第一位置所要再现的对象目标声音;合成对象目标声音,得到目标声音;在第一位置上再现目标声音。
可选地,在检测到至少两个音源的情况下,可以根据以下至少一种策略获取目标声音:
1)按照预先配置的比例,将各个音源在第一位置所要再现的对象目标声音合成,得到目标声音;
2)从各个音源在第一位置所要再现的对象目标声音中按照预先配置的优先级获取目标声音;
3)从各个音源在第一位置所要再现的对象目标声音中随机获取目标声音。
例如,设定获取策略为去除爆炸声,当检测到爆炸声时,则可以将该爆炸声去除,将剩下其他音源在第一位置所要再现的对象目标声音合成,以得到目标声音。又例如,设定瀑布声优先级最高,则在检测到瀑布声时,仅在第一位置再现瀑布声,将剩下其他音源在第一位置所要再现的对象目标声音忽略,不再现。
可选地,检测声音触发事件可以包括但不限于以下至少之一:检测第一虚拟对象是否执行声音触发动作,其中,声音触发动作用于生成声音触发事件;检测与第一虚拟对象交互的第二虚拟对象是否触发声音触发事件,其中,第二虚拟对象受第一虚拟对象控制;检测第三虚拟对象是否触发声 音触发事件,其中,用于控制第三虚拟对象的第四虚拟对象与第一虚拟对象为在虚拟场景中的关联对象;检测第一虚拟对象当前所处的虚拟环境中是否包括环境声音触发对象,其中,环境声音触发对象用于按照预定周期触发声音触发事件。
其中,上述第一虚拟对象与第四虚拟对象可以但不限于为在虚拟场景中通过应用客户端控制的虚拟角色对应的对象,其中,第一虚拟对象与第四虚拟对象之间的关联关系可以包括但不限于:战友、敌人或其他在同一虚拟场景中的关联关系。
此外,上述第二虚拟对象可以但不限于为第一虚拟对象所控制的对象,如虚拟场景中的装备(如门、车、枪支)等;上述第三虚拟对象可以但不限于为第四虚拟对象所控制的对象,如虚拟场景中的装备(如门、车、枪支)等。需要说明的是,上述第一虚拟对象至第四虚拟对象仅表示不同的虚拟对象,并不存在对虚拟对象的标号或顺序的限定。
可选地,在检测声音触发事件之前,还包括:为虚拟环境中所包括的虚拟对象配置音效,其中,音效与虚拟对象的属性关联,虚拟对象在执行触发操作后将生成声音触发事件。
需要说明的是,在本实施例中,上述属性可以包括但不限于虚拟对象的材质。不同材质的虚拟对象可以但不限于配置不同的音效,如为虚拟场景中的石块、金属配置不同的音效,以达到模拟真实的自然对象的声音的效果。
通过本申请实施例,采用检测第一虚拟对象的音源检测范围内的声音触发事件的方式,通过声音触发事件中的音源特征信息确定音源位置,并获取音源位置与第一虚拟对象所在的第一位置之间的第一传输距离,根据第一传输距离确定要再现的目标声音,通过上述再现目标声音的方法,对音频检测范围内的声音触发事件进行检测,在检测到声音触发事件的情况下,根据检测到的声音触发事件中的音源特征信息准确确定出音源,并根据音源与第一虚拟对象之间的位置关系准确获取到在虚拟场景中第一位 置所要再现的目标声音,而不再限于通过录音再现的单一手段来获取所要再现的声音,从而实现提高在虚拟场景中再现声音的准确性的技术效果,解决了相关技术中声音再现准确性低的问题。
作为一种可选的方案,上述确定单元906包括:
(1)第一确定模块,设置为确定在虚拟场景中第一虚拟对象当前所处的虚拟环境;
(2)获取模块,设置为获取与虚拟环境匹配的音源的声音曲线,其中,声音曲线用于指示音源所触发的声音与传输距离之间的对应关系;
(3)第二确定模块,设置为从声音曲线中确定出与第一传输距离匹配的目标声音。
可选地,可以预先设定虚拟环境与音源的声音曲线的对应关系,根据不同的虚拟环境获取与虚拟环境匹配的声音曲线。
例如,虚拟环境可以为广场、水中、沙漠、草地。结合图6进行说明。图6中示出了四种声音曲线,其中每一种声音曲线对应一种或多种环境。其中,左上角的声音曲线是广场的声音曲线,右上角的声音曲线是水中的声音曲线,左下角的声音曲线是沙漠的声音曲线,右下角的声音曲线为草地的声音曲线。根据不同的环境,获取不同的声音曲线,并根据不同的声音曲线,获取目标声音。
需要说明的是,上述虚拟环境为广场、水中、沙漠、草地等仅为一种示例,虚拟环境也可以为其他环境。图6中的四个声音曲线仅为了说明,具体的声音曲线的变化趋势需要根据实际进行设定。
通过本实施例,通过为不同的环境设置不同的声音曲线,从而使音源在不同的环境下有不同的目标声音,达到了根据环境的改变而调整目标声音的效果,提高了声音再现的准确性。
作为一种可选的方案,上述装置还包括:
(1)第一配置单元,设置为在检测声音触发事件之前,配置音源的声音曲线,其中,声音曲线包括:第一曲线、第二曲线,第一曲线用于指示音源所触发的声音未产生衰减的曲线段,第二曲线用于指示音源所触发的声音产生衰减的曲线段。
继续以游戏应用进行说明,例如,音源可以为游戏中的虚拟对象,当虚拟对象发出如枪声、吼声等声音时,声音在一定距离内的衰减速度慢,于是形成图7中的第一曲线,当超过一定的距离时,声音衰减速度加快,形成图7中的第二曲线。
需要说明的是,根据音源发出声音的不同,第一曲线与第二曲线的衰减速度与分界线也不同。
例如,音源为一辆汽车,则汽车发出的声音的传输距离远,因此,第一曲线相对更长,在经过较长的距离后,声音才开始加速衰减,形成第二曲线,而如果音源为一辆自行车,自行车发出的声音的传输距离近,因此,第一曲线要相对短一些,在经过较短的距离后,声音就开始加速衰减,形成第二曲线。
需要说明的是,上述举例内容与图7记载的内容仅为了解释本申请,并不构成对本申请的限定。
通过本实施例,通过在为音源配置的声音曲线中配置第一曲线与第二曲线,从而在经过一段距离后声音的衰减速度加快,从而提高了声音再现的准确性。
作为一种可选的方案,第二确定模块包括:
(1)第一获取子模块,设置为从声音曲线中获取音源的衰减距离,其中,在到达衰减距离后,音源所触发的声音将无法被再现;
(2)确定子模块,设置为在第一传输距离小于衰减距离的情况下,确定出与第一传输距离匹配的目标声音。
继续以游戏应用为例进行说明。第一虚拟对象可以为用户控制的虚拟人物，音源可以为游戏中的交通工具。当游戏中的交通工具发出的声音处于虚拟人物的音源检测范围内时，根据交通工具发出的声音对应的声音曲线获取目标声音。如果交通工具距离虚拟人物的距离过远，则如图7所示，此时的目标声音的值为零。此时，即使交通工具发出的声音被检测到，也由于距离太远不会被虚拟人物听到。如果交通工具距离虚拟人物的传输距离所对应的目标声音不为零，则虚拟人物可以听到交通工具所发出的声音。
通过本实施例,通过根据衰减距离确定是否再现目标声音,从而在第一传输距离过大的情况下,不再现目标声音,达到了提高声音再现准确性的效果。
作为一种可选的方案,获取单元904包括:
(1)提取模块,设置为从音源特征信息中提取出用于指示音源位置的音源坐标;
(2)计算模块,设置为根据音源坐标及第一位置对应的位置坐标,计算出第一传输距离。
可选地,继续以游戏应用为例进行说明,第一虚拟对象可以为游戏中的虚拟人物,音源可以为交通工具,如图8所示,以虚拟人物所在平面建立二维坐标系,交通工具A所在的位置在二维坐标系中的坐标为(4,3),则根据交通工具所在的坐标,计算交通工具到虚拟人物的距离,计算结果为5。
通过本实施例,通过建立坐标系的方法获取音源的坐标,并计算音源到第一虚拟对象的第一传输距离,从而可以根据第一传输距离准确调整目标声音,实现了提高声音再现的准确性的效果。
作为一种可选的方案,再现单元908包括:
(1)第一再现模块,设置为在检测出一个音源的情况下,确定一个音源在第一位置所要再现的目标声音;在第一位置上再现目标声音;
(2)第二再现模块,设置为在检测出至少两个音源的情况下,确定至少两个音源分别在第一位置所要再现的对象目标声音;合成对象目标声音,得到目标声音;在第一位置上再现目标声音。
可选地,上述第二再现模块包括以下至少之一:
(1)合成子模块,设置为按照预先配置的比例合成对象目标声音,得到目标声音;
(2)第二获取子模块,设置为从对象目标声音中按照预先配置的优先级获取目标声音;
(3)第三获取子模块,设置为从对象目标声音中随机获取目标声音。
可选地,可以为每一个对象目标声音设置合成比例,当获取到多个对象目标声音时,根据为每一个对象目标声音设置的合成比例,将多个对象目标声音合成为目标声音。
例如,继续以游戏应用为例进行说明,音源可以是游戏中的交通工具,风,手枪等。交通工具的声音的合成比例为0.3,风声的合成比例为0.2,手枪的声音的合成比例为0.5。当获取到交通工具、风、手枪的对象目标声音时,将对象目标声音与对应的合成比例做乘法,再将做乘法后的对象目标声音合成为目标声音。
可选地,可以为音源的声音设置优先级,不同的音源的声音对应不同的优先级,优先级高的音源所发出的声音在目标声音中优先被听到,在存在优先级高的音源所发出的声音的情况下,优先级低的音源的声音不会被听到或者声音变小。
例如，继续以游戏应用为例进行说明，音源可以为交通工具或手枪。手枪的优先级要高于交通工具的优先级。当获取到手枪与交通工具的对象目标声音后，因为手枪的优先级要高，因此，目标声音中手枪的声音要比交通工具的声音大，或者听不到交通工具的声音。
通过本实施例,通过采用不同的方法获取目标声音,从而提高了获取目标声音的灵活性,进而实现了提高音源再现的灵活性的效果。
作为一种可选的方案,检测单元902包括以下至少之一:
(1)第一检测模块,设置为检测第一虚拟对象是否执行声音触发动作,其中,声音触发动作用于生成声音触发事件;
(2)第二检测模块,设置为检测与第一虚拟对象交互的第二虚拟对象是否触发声音触发事件,其中,第二虚拟对象受第一虚拟对象控制;
(3)第三检测模块,设置为检测第三虚拟对象是否触发声音触发事件,其中,用于控制第三虚拟对象的第四虚拟对象与第一虚拟对象为在虚拟场景中的关联对象;
(4)第四检测模块,设置为检测第一虚拟对象当前所处的虚拟环境中是否包括环境声音触发对象,其中,环境声音触发对象用于按照预定周期触发声音触发事件。
例如，继续以游戏应用为例进行说明，第一虚拟对象可以为第一用户控制的虚拟人物，第二虚拟对象可以为第一用户控制的虚拟人物的武器，第四虚拟对象可以为其他用户控制的虚拟人物，第三虚拟对象可以为其他用户控制的虚拟人物的武器，环境声音触发对象可以为风、雨等。
例如，在进行一局游戏时，若第一用户控制虚拟人物进行移动时发出声音，则第一用户控制的虚拟人物触发了声音触发事件；第一用户控制虚拟人物使用了武器，则该武器触发了声音触发事件。若其他用户控制虚拟人物进行移动时发出了声音，则其他用户控制的虚拟人物触发了声音触发事件；其他用户控制虚拟人物使用了武器，则该武器触发了声音触发事件。如果环境中有风，则风会触发声音触发事件。
通过本实施例,通过检测第一虚拟对象、第二虚拟对象、第三虚拟对象与虚拟对象所在的虚拟环境是否触发了声音触发事件,从而实现了准确检测声音触发事件,以根据声音触发事件获取目标声音的目的,提高了声 音再现的准确性。
作为一种可选的方案,上述装置还包括:
(1)第二配置单元,设置为在检测声音触发事件之前,为虚拟场景中所包括的虚拟对象配置音效,其中,音效与虚拟对象的属性关联,虚拟对象在执行触发操作后将生成声音触发事件。
例如,继续以游戏应用为例进行说明,虚拟对象可以为游戏中的虚拟人物、虚拟物品等,例如武器,交通工具等。当虚拟人物进行移动,使用武器,使用交通工具时,都对应产生音效,根据虚拟人物或虚拟物品的类型,设置不同的音效。例如,交通工具为汽车或自行车时,设置的音效不同。
通过本实施例,通过为虚拟对象配置音效,从而可以对配置的音效进行检测,并合成目标声音,从而实现了提高声音再现的灵活性的效果。
根据本申请实施例的又一方面,还提供了一种存储介质,该存储介质中存储有计算机程序,其中,该计算机程序被设置为运行时执行上述任一项方法实施例中的步骤。
可选地,在本实施例中,上述存储介质可以被设置为存储用于执行以下步骤的计算机程序:
S1,在虚拟场景中与第一虚拟对象对应的音源检测范围内,检测声音触发事件,其中,声音触发事件中携带有用于与触发声音的音源匹配的音源特征信息;
S2,在检测到声音触发事件的情况下,根据音源特征信息确定音源所在的音源位置,并获取音源位置与第一虚拟对象所在的第一位置之间的第一传输距离;
S3,根据第一传输距离确定出音源在第一位置所要再现的目标声音;
S4,在虚拟场景中的第一位置再现目标声音。
可选地,在本实施例中,上述存储介质可以被设置为存储用于执行以下步骤的计算机程序:
S1,确定在虚拟场景中第一虚拟对象当前所处的虚拟环境;
S2,获取与虚拟环境匹配的音源的声音曲线,其中,声音曲线用于指示音源所触发的声音与传输距离之间的对应关系;
S3,从声音曲线中确定出与第一传输距离匹配的目标声音。
可选地,在本实施例中,上述存储介质可以被设置为存储用于执行以下步骤的计算机程序:
S1,配置音源的声音曲线,其中,声音曲线包括:第一曲线、第二曲线,第一曲线用于指示音源所触发的声音未产生衰减的曲线段,第二曲线用于指示音源所触发的声音产生衰减的曲线段。
可选地,在本实施例中,上述存储介质可以被设置为存储用于执行以下步骤的计算机程序:
S1,从声音曲线中获取音源的衰减距离,其中,在到达衰减距离后,音源所触发的声音将无法被再现;
S2,在第一传输距离小于衰减距离的情况下,确定出与第一传输距离匹配的目标声音。
可选地,在本实施例中,上述存储介质可以被设置为存储用于执行以下步骤的计算机程序:
S1,从音源特征信息中提取出用于指示音源位置的音源坐标;
S2,根据音源坐标及第一位置对应的位置坐标,计算出第一传输距离。
可选地,在本实施例中,上述存储介质可以被设置为存储用于执行以下步骤的计算机程序:
S1,在检测出一个音源的情况下,确定一个音源在第一位置所要再现的目标声音;在第一位置上再现目标声音;
S2,在检测出至少两个音源的情况下,确定至少两个音源分别在第一位置所要再现的对象目标声音;合成对象目标声音,得到目标声音;在第一位置上再现目标声音。
可选地,在本实施例中,上述存储介质可以被设置为存储用于执行以下步骤的计算机程序:
S1,按照预先配置的比例合成对象目标声音,得到目标声音;
S2,从对象目标声音中按照预先配置的优先级获取目标声音;
S3,从对象目标声音中随机获取目标声音。
可选地,在本实施例中,上述存储介质可以被设置为存储用于执行以下步骤的计算机程序:
S1,检测第一虚拟对象是否执行声音触发动作,其中,声音触发动作用于生成声音触发事件;
S2,检测与第一虚拟对象交互的第二虚拟对象是否触发声音触发事件,其中,第二虚拟对象受第一虚拟对象控制;
S3,检测第三虚拟对象是否触发声音触发事件,其中,用于控制第三虚拟对象的第四虚拟对象与第一虚拟对象为在虚拟场景中的关联对象;
S4,检测第一虚拟对象当前所处的虚拟环境中是否包括环境声音触发对象,其中,环境声音触发对象用于按照预定周期触发声音触发事件。
可选地,在本实施例中,上述存储介质可以被设置为存储用于执行以下步骤的计算机程序:
S1,为虚拟环境中所包括的虚拟对象配置音效,其中,音效与虚拟对象的属性关联,虚拟对象在执行触发操作后将生成声音触发事件。
可选地,存储介质还被设置为存储用于执行上述实施例中的方法中所包括的步骤的计算机程序,本实施例中对此不再赘述。
可选地,在本实施例中,本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令终端设备相关的硬件来完成,该程序可以存储于一计算机可读存储介质中,存储介质可以包括:闪存盘、只读存储器(Read-Only Memory,ROM)、随机存取器(Random Access Memory,RAM)、磁盘或光盘等。
根据本申请实施例的又一个方面,还提供了一种用于实施上述声音再现方法的电子装置,如图10所示,该电子装置包括存储器1004和处理器1002,该存储器1004中存储有计算机程序,该处理器1002被设置为通过计算机程序执行上述任一项方法实施例中的步骤。
可选地,在本实施例中,上述电子装置可以位于计算机网络的多个网络设备中的至少一个网络设备。
可选地,在本实施例中,上述处理器可以被设置为通过计算机程序执行以下步骤:
S1,在虚拟场景中与第一虚拟对象对应的音源检测范围内,检测声音触发事件,其中,声音触发事件中携带有用于与触发声音的音源匹配的音源特征信息;
S2,在检测到声音触发事件的情况下,根据音源特征信息确定音源所在的音源位置,并获取音源位置与第一虚拟对象所在的第一位置之间的第一传输距离;
S3,根据第一传输距离确定出音源在第一位置所要再现的目标声音;
S4,在虚拟场景中的第一位置再现目标声音。
可选地,本领域普通技术人员可以理解,图10所示的结构仅为示意,电子装置也可以是智能手机(如Android手机、iOS手机等)、平板电脑、 掌上电脑以及移动互联网设备(Mobile Internet Devices,MID)、PAD等终端设备。图10其并不对上述电子装置的结构造成限定。例如,电子装置还可包括比图10中所示更多或者更少的组件(如网络接口等),或者具有与图10所示不同的配置。
其中,存储器1004可用于存储软件程序以及模块,如本申请实施例中的声音再现方法和装置对应的程序指令/模块,处理器1002通过运行存储在存储器1004内的软件程序以及模块,从而执行各种功能应用以及数据处理,即实现上述的声音再现方法。存储器1004可包括高速随机存储器,还可以包括非易失性存储器,如一个或者多个磁性存储装置、闪存、或者其他非易失性固态存储器。在一些实例中,存储器1004可进一步包括相对于处理器1002远程设置的存储器,这些远程存储器可以通过网络连接至终端。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
可选地,在本实施例中,上述电子装置还包括传输装置1010,该传输装置1010用于经由一个网络接收或者发送数据。上述的网络具体实例可包括有线网络及无线网络。在一个实例中,传输装置1010包括一个网络适配器(Network Interface Controller,NIC),其可通过网线与其他网络设备与路由器相连从而可与互联网或局域网进行通讯。在一个实例中,传输装置1010为射频(Radio Frequency,简称为RF)模块,其用于通过无线方式与互联网进行通讯。
可选地,在本实施例中,上述电子装置还包括:用户接口1006及显示器1008,其中,上述显示器1008用于显示虚拟场景及对应的虚拟对象,上述用户接口1006用于获取操作对应的操作指令,其中,上述操作可以包括但不限于:触屏操作、点击操作、语音输入操作等。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
上述实施例中的集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在上述计算机可读取的存储介质中。基于这样的理解,本申请的技术方案本质上或者说对相关技术做出贡献的 部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在存储介质中,包括若干指令用以使得一台或多台计算机设备(可为个人计算机、服务器或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。
在本申请的上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的客户端,可通过其它的方式实现。其中,以上所描述的装置实施例仅仅是示意性的,例如所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,单元或模块的间接耦合或通信连接,可以是电性或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
以上所述仅是本申请的可选实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本申请原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本申请的保护范围。
工业实用性
在本申请实施例中,采用检测第一虚拟对象的音源检测范围内的声音触发事件的方式,通过声音触发事件中的音源特征信息确定音源位置,并获取音源位置与第一虚拟对象所在的第一位置之间的第一传输距离,根据 第一传输距离确定要再现的目标声音,从而对音频检测范围内的声音触发事件进行检测,在检测到声音触发事件的情况下,根据检测到的声音触发事件中的音源特征信息准确确定出音源,并根据音源与第一虚拟对象之间的位置关系准确获取到在虚拟场景中第一位置所要再现的目标声音,而不再限于通过录音再现的单一手段来获取所要再现的声音,实现提高在虚拟场景中再现声音的准确性的技术效果。

Claims (20)

  1. 一种声音再现方法,包括:
    终端在虚拟场景中与第一虚拟对象对应的音源检测范围内,检测声音触发事件,其中,所述声音触发事件中携带有用于与触发声音的音源匹配的音源特征信息;
    在检测到所述声音触发事件的情况下,所述终端根据所述音源特征信息确定所述音源所在的音源位置,并获取所述音源位置与所述第一虚拟对象所在的第一位置之间的第一传输距离;
    所述终端根据所述第一传输距离确定出所述音源在所述第一位置所要再现的目标声音;
    所述终端在所述虚拟场景中的所述第一位置再现所述目标声音。
  2. 根据权利要求1所述的方法,其中,所述终端根据所述第一传输距离确定出所述音源在所述第一位置所要再现的目标声音包括:
    所述终端确定在所述虚拟场景中所述第一虚拟对象当前所处的虚拟环境;
    所述终端获取与所述虚拟环境匹配的所述音源的声音曲线,其中,所述声音曲线用于指示所述音源所触发的声音与传输距离之间的对应关系;
    所述终端从所述声音曲线中确定出与所述第一传输距离匹配的所述目标声音。
  3. 根据权利要求2所述的方法,其中,在所述终端检测声音触发事件之前,还包括:
    所述终端配置所述音源的所述声音曲线,其中,所述声音曲线包括:第一曲线、第二曲线,所述第一曲线用于指示所述音源所触发的声音未产生衰减的曲线段,所述第二曲线用于指示所述音源所触发的声音产生衰减的曲线段。
  4. 根据权利要求2所述的方法,其中,所述终端从所述声音曲线中确定出与所述第一传输距离匹配的所述目标声音包括:
    所述终端从所述声音曲线中获取所述音源的衰减距离,其中,在到达所述衰减距离后,所述音源所触发的声音将无法被再现;
    在所述第一传输距离小于所述衰减距离的情况下,所述终端确定出与所述第一传输距离匹配的所述目标声音。
  5. 根据权利要求1所述的方法,其中,所述终端根据所述音源特征信息确定所述音源所在的音源位置,并获取所述音源位置与所述第一虚拟对象所在的第一位置之间的第一传输距离包括:
    所述终端从所述音源特征信息中提取出用于指示所述音源位置的音源坐标;
    所述终端根据所述音源坐标及所述第一位置对应的位置坐标,计算出所述第一传输距离。
  6. 根据权利要求1所述的方法,其中,所述终端在所述虚拟场景中的所述第一位置再现所述目标声音包括:
    在检测出一个所述音源的情况下,所述终端确定一个所述音源在所述第一位置所要再现的所述目标声音;在所述第一位置上再现所述目标声音;
    在检测出至少两个所述音源的情况下,所述终端确定至少两个所述音源分别在所述第一位置所要再现的对象目标声音;合成所述对象目标声音,得到所述目标声音;在所述第一位置上再现所述目标声音。
  7. 根据权利要求6所述的方法,其中,所述终端合成所述对象目标声音,得到所述目标声音包括以下至少之一:
    所述终端按照预先配置的比例合成所述对象目标声音,得到所述目标声音;
    所述终端从所述对象目标声音中按照预先配置的优先级获取所述目标声音;
    所述终端从所述对象目标声音中随机获取所述目标声音。
  8. 根据权利要求1所述的方法,其中,所述终端检测声音触发事件包括以下至少之一:
    所述终端检测所述第一虚拟对象是否执行声音触发动作,其中,所述声音触发动作用于生成所述声音触发事件;
    所述终端检测与所述第一虚拟对象交互的第二虚拟对象是否触发所述声音触发事件,其中,所述第二虚拟对象受所述第一虚拟对象控制;
    所述终端检测第三虚拟对象是否触发所述声音触发事件,其中,用于控制所述第三虚拟对象的第四虚拟对象与所述第一虚拟对象为在所述虚拟场景中的关联对象;
    所述终端检测所述第一虚拟对象当前所处的虚拟环境中是否包括环境声音触发对象,其中,所述环境声音触发对象用于按照预定周期触发所述声音触发事件。
  9. 根据权利要求1所述的方法,其中,在所述终端检测声音触发事件之前,还包括:
    所述终端为所述虚拟场景中所包括的虚拟对象配置音效,其中,所述音效与所述虚拟对象的属性关联,所述虚拟对象在执行触发操作后将生成所述声音触发事件。
  10. 一种声音再现装置,应用于终端,包括:
    检测单元,设置为在虚拟场景中与第一虚拟对象对应的音源检测范围内,检测声音触发事件,其中,所述声音触发事件中携带有用于与触发声音的音源匹配的音源特征信息;
    获取单元,设置为在检测到所述声音触发事件的情况下,根据所 述音源特征信息确定所述音源所在的音源位置,并获取所述音源位置与所述第一虚拟对象所在的第一位置之间的第一传输距离;
    确定单元,设置为根据所述第一传输距离确定出所述音源在所述第一位置所要再现的目标声音;
    再现单元,设置为在所述虚拟场景中的所述第一位置再现所述目标声音。
  11. 根据权利要求10所述的装置,其中,所述确定单元包括:
    第一确定模块,设置为确定在所述虚拟场景中所述第一虚拟对象当前所处的虚拟环境;
    获取模块,设置为获取与所述虚拟环境匹配的所述音源的声音曲线,其中,所述声音曲线用于指示所述音源所触发的声音与传输距离之间的对应关系;
    第二确定模块,设置为从所述声音曲线中确定出与所述第一传输距离匹配的所述目标声音。
  12. 根据权利要求11所述的装置,其中,所述装置还包括:
    第一配置单元,设置为在所述检测声音触发事件之前,配置所述音源的所述声音曲线,其中,所述声音曲线包括:第一曲线、第二曲线,所述第一曲线用于指示所述音源所触发的声音未产生衰减的曲线段,所述第二曲线用于指示所述音源所触发的声音产生衰减的曲线段。
  13. The apparatus according to claim 11, wherein the second determining module comprises:
    a first obtaining submodule, configured to obtain an attenuation distance of the sound source from the sound curve, wherein beyond the attenuation distance the sound triggered by the sound source can no longer be reproduced; and
    a determining submodule, configured to determine, in a case that the first transmission distance is less than the attenuation distance, the target sound matching the first transmission distance.
  14. The apparatus according to claim 10, wherein the obtaining unit comprises:
    an extraction module, configured to extract, from the sound source feature information, sound source coordinates indicating the sound source position; and
    a calculation module, configured to calculate the first transmission distance according to the sound source coordinates and position coordinates corresponding to the first position.
  15. The apparatus according to claim 10, wherein the reproduction unit comprises:
    a first reproduction module, configured to: in a case that one sound source is detected, determine the target sound to be reproduced at the first position by the one sound source, and reproduce the target sound at the first position; and
    a second reproduction module, configured to: in a case that at least two sound sources are detected, determine object target sounds to be reproduced at the first position by the at least two sound sources respectively, synthesize the object target sounds to obtain the target sound, and reproduce the target sound at the first position.
  16. The apparatus according to claim 15, wherein the second reproduction module comprises at least one of the following:
    a synthesis submodule, configured to synthesize the object target sounds according to a preconfigured ratio to obtain the target sound;
    a second obtaining submodule, configured to obtain the target sound from the object target sounds according to a preconfigured priority; and
    a third obtaining submodule, configured to obtain the target sound randomly from the object target sounds.
  17. The apparatus according to claim 10, wherein the detection unit comprises at least one of the following:
    a first detection module, configured to detect whether the first virtual object performs a sound trigger action, wherein the sound trigger action is used for generating the sound trigger event;
    a second detection module, configured to detect whether a second virtual object interacting with the first virtual object triggers the sound trigger event, wherein the second virtual object is controlled by the first virtual object;
    a third detection module, configured to detect whether a third virtual object triggers the sound trigger event, wherein a fourth virtual object used for controlling the third virtual object and the first virtual object are associated objects in the virtual scene; and
    a fourth detection module, configured to detect whether a virtual environment in which the first virtual object is currently located comprises an environment sound trigger object, wherein the environment sound trigger object is used for triggering the sound trigger event according to a predetermined period.
  18. The apparatus according to claim 10, wherein the apparatus further comprises:
    a second configuration unit, configured to configure, before the sound trigger event is detected, a sound effect for a virtual object comprised in the virtual scene, wherein the sound effect is associated with an attribute of the virtual object, and the virtual object generates the sound trigger event after performing a trigger operation.
  19. A storage medium, storing a computer program, wherein the computer program is configured to perform, when run, the method according to any one of claims 1 to 9.
  20. An electronic apparatus, comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to perform, by means of the computer program, the method according to any one of claims 1 to 9.
PCT/CN2018/117149 2018-02-09 2018-11-23 Sound reproduction method and apparatus, storage medium, and electronic apparatus WO2019153840A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP18905673.2A EP3750608A4 (en) 2018-02-09 2018-11-23 SOUND REPRODUCTION PROCESS AND DEVICE, INFORMATION MEDIA AND ELECTRONIC DEVICE
US16/892,054 US11259136B2 (en) 2018-02-09 2020-06-03 Sound reproduction method and apparatus, storage medium, and electronic apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810135960.7A CN108597530B (zh) 2018-02-09 2018-02-09 Sound reproduction method and apparatus, storage medium, and electronic apparatus
CN201810135960.7 2018-02-09

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/892,054 Continuation US11259136B2 (en) 2018-02-09 2020-06-03 Sound reproduction method and apparatus, storage medium, and electronic apparatus

Publications (1)

Publication Number Publication Date
WO2019153840A1 true WO2019153840A1 (zh) 2019-08-15

Family

ID=63608681

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/117149 WO2019153840A1 (zh) Sound reproduction method and apparatus, storage medium, and electronic apparatus

Country Status (4)

Country Link
US (1) US11259136B2 (zh)
EP (1) EP3750608A4 (zh)
CN (1) CN108597530B (zh)
WO (1) WO2019153840A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111714889A (zh) * 2020-06-19 2020-09-29 网易(杭州)网络有限公司 Method and apparatus for sound source control, computer device, and medium

Families Citing this family (17)

Publication number Priority date Publication date Assignee Title
US10930038B2 (en) 2014-06-10 2021-02-23 Lab Of Misfits Ar, Inc. Dynamic location based digital element
US10026226B1 (en) * 2014-06-10 2018-07-17 Ripple Inc Rendering an augmented reality object
CN108597530B (zh) 2018-02-09 2020-12-11 腾讯科技(深圳)有限公司 Sound reproduction method and apparatus, storage medium, and electronic apparatus
CN109529335B (zh) * 2018-11-06 2022-05-20 Oppo广东移动通信有限公司 Game character sound effect processing method and apparatus, mobile terminal, and storage medium
CN109597481B (zh) * 2018-11-16 2021-05-04 Oppo广东移动通信有限公司 AR virtual character rendering method and apparatus, mobile terminal, and storage medium
EP3712788A1 (en) * 2019-03-19 2020-09-23 Koninklijke Philips N.V. Audio apparatus and method therefor
CN110189764B (zh) * 2019-05-29 2021-07-06 深圳壹秘科技有限公司 System, method, and recording device for displaying separated roles
CN110270094A (zh) * 2019-07-17 2019-09-24 珠海天燕科技有限公司 Method and apparatus for audio control in a game
CN110898430B (zh) * 2019-11-26 2021-12-07 腾讯科技(深圳)有限公司 Sound source localization method and apparatus, storage medium, and electronic apparatus
US20220222723A1 (en) * 2021-01-12 2022-07-14 Inter Ikea Systems B.V. Product quality inspection system
CN113398590B (zh) * 2021-07-14 2024-04-30 网易(杭州)网络有限公司 Sound processing method and apparatus, computer device, and storage medium
CN114035764A (zh) * 2021-11-05 2022-02-11 郑州捷安高科股份有限公司 Three-dimensional sound effect simulation method, apparatus, device, and storage medium
CN114176623B (zh) * 2021-12-21 2023-09-12 深圳大学 Sound noise reduction method and system, noise reduction device, and computer-readable storage medium
CN114049871A (zh) * 2022-01-13 2022-02-15 腾讯科技(深圳)有限公司 Virtual-space-based audio processing method and apparatus, and computer device
CN114504820A (zh) * 2022-02-14 2022-05-17 网易(杭州)网络有限公司 Audio processing method and apparatus in a game, storage medium, and electronic apparatus
CN114721512B (zh) * 2022-03-18 2023-06-13 北京城市网邻信息技术有限公司 Terminal interaction method and apparatus, electronic device, and storage medium
CN116704085B (zh) * 2023-08-08 2023-11-24 安徽淘云科技股份有限公司 Avatar generation method and apparatus, electronic device, and storage medium

Citations (6)

Publication number Priority date Publication date Assignee Title
CN101472652A (zh) * 2006-06-16 2009-07-01 科乐美数码娱乐株式会社 Game sound output device, game sound control method, information recording medium, and program
CN103096134A (zh) * 2013-02-08 2013-05-08 广州博冠信息科技有限公司 Data processing method and device based on live video streaming and games
CN105879390A (zh) * 2016-04-26 2016-08-24 乐视控股(北京)有限公司 Virtual reality game processing method and device
CN106843801A (zh) * 2017-01-10 2017-06-13 福建省天奕网络科技有限公司 Sound effect fitting method and system
CN107360494A (zh) * 2017-08-03 2017-11-17 北京微视酷科技有限责任公司 3D sound effect processing method, apparatus, and system, and audio system
CN108597530A (zh) * 2018-02-09 2018-09-28 腾讯科技(深圳)有限公司 Sound reproduction method and apparatus, storage medium, and electronic apparatus

Family Cites Families (11)

Publication number Priority date Publication date Assignee Title
JP2002282538A (ja) * 2001-01-19 2002-10-02 Sony Computer Entertainment Inc Voice control program, computer-readable storage medium recording the voice control program, program execution device executing the voice control program, voice control device, and voice control method
US7338373B2 (en) * 2002-12-04 2008-03-04 Nintendo Co., Ltd. Method and apparatus for generating sounds in a video game
US20040235545A1 (en) * 2003-05-20 2004-11-25 Landis David Alan Method and system for playing interactive game
JP3977405B1 (ja) * 2006-03-13 2007-09-19 株式会社コナミデジタルエンタテインメント Game sound output device, game sound control method, and program
JP2009213559A (ja) * 2008-03-07 2009-09-24 Namco Bandai Games Inc Game device
CN101384105B (zh) * 2008-10-27 2011-11-23 华为终端有限公司 Method, apparatus, and system for three-dimensional sound reproduction
JP5036797B2 (ja) * 2009-12-11 2012-09-26 株式会社スクウェア・エニックス Sound generation processing device, sound generation processing method, and sound generation processing program
JP6243595B2 (ja) * 2012-10-23 2017-12-06 任天堂株式会社 Information processing system, information processing program, information processing control method, and information processing device
JP6147486B2 (ja) * 2012-11-05 2017-06-14 任天堂株式会社 Game system, game processing control method, game device, and game program
CN107115672A (zh) * 2016-02-24 2017-09-01 网易(杭州)网络有限公司 Game audio resource playback method and apparatus, and game system
CN107469354B (zh) * 2017-08-30 2018-06-22 网易(杭州)网络有限公司 Visual method and apparatus for compensating sound information, storage medium, and electronic device


Non-Patent Citations (1)

Title
See also references of EP3750608A4 *


Also Published As

Publication number Publication date
CN108597530B (zh) 2020-12-11
EP3750608A4 (en) 2021-11-10
US11259136B2 (en) 2022-02-22
CN108597530A (zh) 2018-09-28
EP3750608A1 (en) 2020-12-16
US20200296532A1 (en) 2020-09-17

Similar Documents

Publication Publication Date Title
WO2019153840A1 (zh) Sound reproduction method and apparatus, storage medium, and electronic apparatus
US11514653B1 (en) Streaming mixed-reality environments between multiple devices
US10424077B2 (en) Maintaining multiple views on a shared stable virtual space
TWI468734B (zh) Method, portable device, and computer program for maintaining multiple views in a shared stable virtual space
US9956487B2 (en) Variable audio parameter setting
US9724608B2 (en) Computer-readable storage medium storing information processing program, information processing device, information processing system, and information processing method
US9744459B2 (en) Computer-readable storage medium storing information processing program, information processing device, information processing system, and information processing method
US20200228911A1 (en) Audio spatialization
CN112402943A (zh) Joining or replaying a game instance from a game broadcast
WO2022267729A1 (zh) Virtual-scene-based interaction method and apparatus, device, medium, and program product
CN111467804A (zh) Hit processing method and apparatus in a game
US20100303265A1 (en) Enhancing user experience in audio-visual systems employing stereoscopic display and directional audio
WO2023151283A1 (zh) Audio processing method and apparatus in a game, storage medium, and electronic apparatus
US20210322880A1 (en) Audio spatialization
CN116096466A (zh) System and method for guiding a user to play a game
JP6198375B2 (ja) Game program and game system
US11890548B1 (en) Crowd-sourced esports stream production
JP7071649B2 (ja) Voice control program and voice control device
WO2024055811A1 (zh) Message display method and apparatus, device, medium, and program product
WO2024078324A1 (zh) Virtual object control method and apparatus, storage medium, and electronic device
JP2024041359A (ja) Game program and game device
JP2022034160A (ja) Audio playback program and audio playback device
CN117046104A (zh) Interaction method and apparatus in a game, electronic device, and readable storage medium
CN116832446A (zh) Virtual character control method and apparatus, and electronic device
CN115779441A (zh) Method and apparatus for sending gain virtual items, mobile terminal, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18905673

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018905673

Country of ref document: EP

Effective date: 20200909