CN112882568A - Audio playing method and device, electronic equipment and storage medium - Google Patents

Audio playing method and device, electronic equipment and storage medium

Info

Publication number: CN112882568A
Application number: CN202110113161.1A
Authority: CN (China)
Prior art keywords: audio, determining, sound source, location, virtual sound
Other languages: Chinese (zh)
Inventors: 蓝斌, 张凯, 王子彬
Current and Original Assignee: Shenzhen TetrasAI Technology Co Ltd
Application filed by Shenzhen TetrasAI Technology Co Ltd
Priority to CN202110113161.1A
Publication of CN112882568A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/006: Mixed reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16: Sound input; Sound output
    • G06F 3/162: Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs


Abstract

The present disclosure relates to an audio playing method and apparatus, an electronic device, and a storage medium. The method includes: determining a first position of a virtual sound source and a second position of an AR device in an augmented reality (AR) scene; determining transmission parameters for sound traveling from the first position to the second position; and adjusting the audio emitted by the virtual sound source according to the transmission parameters and playing the adjusted audio through the AR device. The disclosed embodiments allow the virtual information played by the AR device to blend more naturally with the real environment, making the user's experience more realistic and thereby improving it.

Description

Audio playing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an audio playing method and apparatus, an electronic device, and a storage medium.
Background
Augmented Reality (AR) technology can combine real-world information with virtual-world information, applying virtual information to the real world where it can be perceived by the human senses, thereby achieving a sensory experience beyond reality. In AR, the real environment and virtual objects are superimposed on the same screen in real time, with the virtual objects matched to the real environment.
In AR technology, the more naturally virtual information is combined with the real environment, the more the user experience improves.
Disclosure of Invention
The present disclosure provides a technical solution for audio playing.
According to an aspect of the present disclosure, there is provided an audio playing method including:
determining a first position of a virtual sound source and a second position of an AR device in an augmented reality (AR) scene;
determining transmission parameters for sound to travel from the first location to the second location;
and adjusting the audio emitted by the virtual sound source according to the transmission parameters, and playing the adjusted audio through the AR device.
In a possible implementation manner, the adjusting, according to the transmission parameter, the audio emitted by the virtual sound source, and playing, by the AR device, the adjusted audio, includes:
processing a first audio emitted by the virtual sound source according to the transmission parameters to obtain a second audio;
playing the second audio through the AR device.
In a possible implementation manner, the transmission parameter includes an audio attenuation parameter, and the processing the first audio emitted by the virtual sound source according to the transmission parameter to obtain a second audio includes:
and carrying out attenuation processing on the first audio according to the audio attenuation parameter to obtain a second audio.
In one possible implementation, the method further includes:
determining the time at which the virtual sound source emits the audio;
the transmission parameters include a time delay parameter, and the adjusting the audio emitted by the virtual sound source and playing the adjusted audio through the AR device includes:
determining a delayed playing time of the audio according to the time at which the virtual sound source emits the audio and the time delay parameter;
and playing the audio through the AR device at that playing time.
In one possible implementation, the method further includes:
determining a three-dimensional structure in the AR scene;
determining structural parameters and/or types of media in a propagation path of sound from the first location to the second location according to a three-dimensional structure in the AR scene;
the determining transmission parameters for sound propagating from the first location to the second location comprises:
determining transmission parameters for sound to travel from the first location to the second location based on the structural parameters and/or type of the medium.
In a possible implementation manner, the determining, according to the structural parameter and/or the type of the medium, a transmission parameter of sound propagating from the first location to the second location includes:
determining the absorption coefficient of the medium for sound according to the type of the medium; and determining the audio attenuation parameter according to the absorption coefficient and the structural parameters.
In one possible implementation, the determining, according to the type of the medium, transmission parameters of sound propagating from the first location to the second location includes:
determining the propagation speed of sound in the medium according to the type of the medium; and determining a time delay parameter according to the propagation speed and the distance of the propagation path.
In one possible implementation, the method further includes:
determining a first orientation of the AR device and a second orientation of the virtual sound source;
determining an included angle between the first orientation and the second orientation;
the determining transmission parameters for sound propagating from the first location to the second location comprises:
and determining transmission parameters of sound transmitted from the first position to the second position according to the included angle.
In one possible implementation, the virtual sound source includes a virtual object capable of emitting audio in the AR scene.
In one possible implementation, the method further includes:
detecting a location of the virtual sound source and/or the AR device;
in response to detecting that the position of the virtual sound source and/or the AR device changes, adjusting the audio emitted by the virtual sound source based on the changed position of the virtual sound source and/or the AR device.
According to an aspect of the present disclosure, there is provided an audio playback apparatus including:
a position determining unit, configured to determine a first position of a virtual sound source and a second position of an AR device in an augmented reality AR scene;
a parameter determination unit for determining transmission parameters of sound propagating from the first location to the second location;
and the adjusting unit is configured to adjust the audio emitted by the virtual sound source according to the transmission parameters and play the adjusted audio through the AR device.
In a possible implementation manner, the adjusting unit is configured to process a first audio emitted by the virtual sound source according to the transmission parameter to obtain a second audio; playing the second audio through the AR device.
In a possible implementation manner, the transmission parameter includes an audio attenuation parameter, and the adjusting unit is configured to perform attenuation processing on the first audio according to the audio attenuation parameter to obtain a second audio.
In one possible implementation, the apparatus further includes:
a time determining unit for determining a time at which the virtual sound source emits audio;
the transmission parameters include a time delay parameter, and the adjusting unit is configured to determine a delayed playing time of the audio according to the time at which the virtual sound source emits the audio and the time delay parameter, and to play the audio through the AR device at that playing time.
In one possible implementation, the apparatus further includes:
a three-dimensional structure determination unit for determining a three-dimensional structure in the AR scene;
a medium determining unit, configured to determine a structural parameter and/or a type of a medium in a propagation path of the sound from the first location to the second location according to a three-dimensional structure in the AR scene;
a parameter determination unit for determining transmission parameters of sound propagating from the first location to the second location according to the structural parameters and/or type of the medium.
In a possible implementation manner, the parameter determining unit is configured to determine the absorption coefficient of the medium for sound according to the type of the medium, and to determine the audio attenuation parameter according to the absorption coefficient and the structural parameters.
In a possible implementation manner, the parameter determining unit is configured to determine a propagation speed of sound in the medium according to a type of the medium; and determining a time delay parameter according to the propagation speed and the distance of the propagation path.
In one possible implementation, the apparatus further includes:
an orientation determination unit for determining a first orientation of the AR device and a second orientation of the virtual sound source;
the included angle determining unit is used for determining an included angle between the first orientation and the second orientation;
and the parameter determining unit is used for determining transmission parameters of sound transmitted from the first position to the second position according to the included angle.
In one possible implementation, the virtual sound source includes a virtual object capable of emitting audio in the AR scene.
In one possible implementation, the apparatus further includes:
a position detection unit for detecting a position of the virtual sound source and/or the AR device;
and the audio adjusting unit is used for responding to the detection that the position of the virtual sound source and/or the AR equipment is changed, and adjusting the audio emitted by the virtual sound source based on the changed position of the virtual sound source and/or the AR equipment.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiments of the present disclosure, a first position of a virtual sound source and a second position of an AR device in an AR scene are determined, transmission parameters for sound traveling from the first position to the second position are then determined, the audio emitted by the virtual sound source is adjusted according to the transmission parameters, and the adjusted audio is played by the AR device. The audio of the AR scene played by the AR device thus takes into account the positions of the virtual sound source and the AR device: the audio emitted by the virtual sound source is adjusted according to the transmission parameters of the sound as it propagates from the virtual sound source to the AR device, so that the virtual information played by the AR device (the adjusted audio) is better combined with the real environment (the first and second positions). The audio played by the AR device changes as the relative position of the virtual sound source and the AR device changes, making the user's experience more realistic and thereby improving it.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flowchart of an audio playing method according to an embodiment of the present disclosure.
Fig. 2 shows a block diagram of an audio playing apparatus according to an embodiment of the present disclosure.
Fig. 3 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
Fig. 4 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Augmented reality, also known as mixed reality, is an emerging technology developed from virtual reality, in which a computer-generated virtual scene can be superimposed onto the real environment seen by the user.
Augmented reality technology superimposes computer-generated virtual objects or system prompt information onto a real scene, thereby 'augmenting' reality: it overlays virtual objects, or information about real objects, onto the real-world scene, achieving an enhancement of the real world.
The embodiments of the present disclosure consider not only the combination of the virtual 'scene' with the real scene, but also the combination of virtual 'sound' with the real scene: the audio emitted by the virtual sound source is adjusted according to the transmission parameters of the sound as it propagates from the virtual sound source to the AR device, so that the virtual information played by the AR device (the adjusted audio) is better combined with the real environment (the first and second positions), the audio played by the AR device changes as the relative position of the virtual sound source and the AR device changes, and the user's experience is more realistic.
The audio playing method provided by the embodiments of the present disclosure may be executed by an electronic device such as a terminal device or a server, where the terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like, and the method may be implemented by a processor calling computer-readable instructions stored in a memory. Alternatively, the method may be performed by a server.
Fig. 1 shows a flowchart of an audio playing method according to an embodiment of the present disclosure, as shown in fig. 1, the audio playing method includes:
in step S11, a first location of a virtual sound source and a second location of an AR device in an augmented reality AR scene are determined.
In AR technology, a scene model is constructed by spatially reconstructing the objects in a real scene; the scene model represents the three-dimensional structure of the real scene. The three-dimensional scene model may be constructed by a three-dimensional reconstruction technique, for example a Structure from Motion (SFM) spatial reconstruction technique.
When a virtual object is displayed in a real scene through the visual interface, the constructed scene model is matched against the real scene captured by the AR device in real time. Once the matching succeeds, a virtual object placed at a given position in the scene model can be displayed at the corresponding position in the real scene captured in real time by the AR device, realizing the combination of the virtual object and the real scene.
After the scene model is successfully matched with the real scene, the position in the scene model of the AR device capturing the real scene, i.e., the second position, can be determined. When the position of the AR device changes, the AR device can also be located in real time through a tracking technology; for example, the position of the AR device may be tracked and located through six-degrees-of-freedom (6DoF) tracking.
The first position of the virtual sound source is the position at which the virtual sound source of the AR scene is placed in the scene model. In one possible implementation, the virtual sound source includes a virtual object capable of emitting audio in the AR scene, such as a virtual animal, virtual fireworks, or a virtual speaker.
It should be noted that the first position and the second position are positions in the scene model; since the scene model is constructed from the real scene, they are in effect positions in a simulation of the real scene, and can therefore reflect the real scene within the AR scene.
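To make this concrete, the following is a minimal sketch (illustrative only; the coordinate representation and helper names are assumptions, not taken from the patent) of how the first and second positions could be represented in the scene model and how the straight-line propagation distance between them could be computed:

```python
# Minimal sketch, assuming scene-model coordinates in meters; not from the patent.
import math
from dataclasses import dataclass

@dataclass
class ScenePosition:
    x: float  # scene-model coordinates, in meters
    y: float
    z: float

def propagation_distance(first: ScenePosition, second: ScenePosition) -> float:
    """Straight-line distance from the first position to the second position."""
    return math.dist((first.x, first.y, first.z), (second.x, second.y, second.z))

source_pos = ScenePosition(0.0, 1.5, 4.0)  # virtual sound source placed in the scene model
device_pos = ScenePosition(0.0, 1.5, 0.0)  # AR device pose, e.g. from 6DoF tracking
print(propagation_distance(source_pos, device_pos))  # 4.0 m
```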
In step S12, transmission parameters for sound to travel from the first location to the second location are determined.
Sound is transmitted through a medium, and during transmission the medium changes properties of the sound such as its amplitude, frequency, and transmission speed. The transmission parameters include the parameters that can cause such changes during sound propagation.
In the process of transmitting sound from the first location to the second location, the sound may be affected by the medium in the transmission path, for example, may be affected by the shape and material of the medium, and various specific ways of determining the transmission parameter may be provided, which may be referred to in the related description of possible implementation manners provided by the present disclosure, and are not repeated herein.
In step S13, the audio generated by the virtual sound source is adjusted according to the transmission parameter, and the adjusted audio is played through the AR device.
In the adjustment process, the amplitude, frequency, playing time, and the like of the audio emitted by the virtual sound source may be changed, and specific adjustment manners may be various, which may be referred to in the related description of possible implementation manners provided in the present disclosure, and are not described herein again.
In the embodiments of the present disclosure, a first position of a virtual sound source and a second position of an AR device in an AR scene are determined, transmission parameters for sound traveling from the first position to the second position are then determined, the audio emitted by the virtual sound source is adjusted according to the transmission parameters, and the adjusted audio is played by the AR device. The audio of the AR scene played by the AR device thus takes into account the positions of the virtual sound source and the AR device: the audio emitted by the virtual sound source is adjusted according to the transmission parameters of the sound as it propagates from the virtual sound source to the AR device, so that the virtual information played by the AR device (the adjusted audio) is better combined with the real environment (the first and second positions). The audio played by the AR device changes as the relative position of the virtual sound source and the AR device changes, making the user's experience more realistic and thereby improving it.
The audio playing method provided by the embodiments of the present disclosure may be implemented in multiple ways. In one possible implementation, the adjusting the audio emitted by the virtual sound source according to the transmission parameters and playing the adjusted audio through the AR device includes: processing a first audio emitted by the virtual sound source according to the transmission parameters to obtain a second audio; and playing the second audio through the AR device.
Here, the processing the first audio includes: and modifying the sound characteristics of the first audio according to the transmission parameters, wherein the sound characteristics can comprise at least one of the following: loudness, pitch, timbre.
In the AR scene, the sound emitted by the virtual sound source exists in the computing device in the form of audio. For example, if the virtual sound source in the AR scene is a firework, the sound it emits is realized by playing audio on the computing device; the sound emitted by the virtual sound source is therefore changed by processing the first audio to obtain the second audio.
Specifically, loudness is the perceived strength of a sound and is determined by its amplitude: the larger the amplitude, the greater the loudness. After the virtual sound source emits the first audio, the audio travels through the medium to the second position where the AR device is located, and its loudness is attenuated along the way; the loudness of the played sound can therefore be adjusted by scaling the amplitude of the first audio according to the transmission parameters.
Pitch is how high or low a sound is and is determined by its frequency (the number of vibrations completed per unit time). As the first audio travels through the medium to the second position where the AR device is located, its frequency content can be affected by the medium; the pitch of the played sound can therefore be adjusted by modifying the frequency of the first audio according to the transmission parameters.
In addition, other sound characteristics in the first audio may also be modified, which is not limited by this disclosure.
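As a hedged illustration of the processing described above, the sketch below scales amplitude to change loudness and resamples to change pitch; the gain and pitch-factor values are arbitrary examples, and the naive resampling (which also alters duration) stands in for whatever processing an implementation actually uses:

```python
# Illustrative sketch, not from the patent: adjusting loudness and pitch of a
# mono audio buffer using only numpy. The 0.5 gain and 1.1 pitch factor are
# arbitrary example values.
import numpy as np

def adjust_loudness(samples: np.ndarray, gain: float) -> np.ndarray:
    """Scale amplitude: gain < 1 attenuates, gain > 1 amplifies."""
    return samples * gain

def adjust_pitch(samples: np.ndarray, factor: float) -> np.ndarray:
    """Naive pitch shift by resampling; factor > 1 raises pitch.
    Note: this also changes duration, which a real implementation would
    compensate for (e.g. with a phase vocoder)."""
    old_idx = np.arange(len(samples))
    new_idx = np.arange(0, len(samples), factor)
    return np.interp(new_idx, old_idx, samples)

first_audio = np.sin(2 * np.pi * 440 * np.linspace(0, 1, 44100))  # 440 Hz test tone
second_audio = adjust_pitch(adjust_loudness(first_audio, 0.5), 1.1)
```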
In the embodiments of the present disclosure, the first audio emitted by the virtual sound source is processed according to the transmission parameters to obtain the second audio, and the second audio is then played through the AR device. The audio played by the AR device is thus audio processed based on the transmission parameters, which are determined from the first and second positions, so that the virtual information played by the AR device (the adjusted audio) is better combined with the real environment (the first and second positions), the audio played by the AR device changes as the relative position of the virtual sound source and the AR device changes, and the user's experience is more realistic and thereby improved.
In the embodiment of the present disclosure, there may be multiple specific implementation manners for determining the transmission parameter, and in one possible implementation manner, the method further includes: determining a three-dimensional structure in the AR scene; determining structural parameters and/or types of media in a propagation path of sound from the first location to the second location according to a three-dimensional structure in the AR scene; the determining transmission parameters for sound propagating from the first location to the second location comprises: determining transmission parameters for sound to travel from the first location to the second location based on the structural parameters and/or type of the medium.
The three-dimensional structure here represents the three-dimensional structure of the real scene and can be obtained by the three-dimensional reconstruction technique described above, according to which the structure of the medium between the first location and the second location can be determined.
The type of structure here may be the type of material constituting the structure, for example, steel, wood, concrete, plastic, etc.; or it may be the type of thing to which the structure belongs, e.g., wall, tree, vehicle, etc. The type of the structure may be a predetermined type, or may be identified by a neural network.
In the embodiments of the present disclosure, because sound is mainly influenced by the structure and type of the medium through which it travels, the transmission parameters can be determined from the structural parameters and type of the medium, accurately capturing how sound is transmitted from the first position to the second position. The audio of the virtual sound source, once adjusted by these transmission parameters, is thus closer to sound propagated through the real environment, making the user's experience more realistic and thereby improving it.
In a possible implementation manner, the determining, according to the structural parameter and the type of the medium, a transmission parameter of sound propagating from the first location to the second location includes: determining the absorption coefficient of the medium to sound according to the type of the medium; and determining audio attenuation parameters according to the absorption coefficient and the structural parameters.
In one possible implementation, the determining, according to the type of the medium, transmission parameters of sound propagating from the first location to the second location includes: determining the propagation speed of sound in the medium according to the type of the medium; and determining a time delay parameter according to the propagation speed and the distance of the propagation path.
The audio attenuation parameter can be determined from both absorption attenuation and scattering attenuation, wherein the absorption attenuation is mainly influenced by the absorption coefficient of the medium to the sound, and the scattering attenuation is mainly influenced by the structure of the medium.
The absorption coefficient of the medium for sound, also referred to as the absorption constant, is a property of the medium itself; depending on the material, it often varies with pressure, temperature, and so on. A specific absorption coefficient may be determined experimentally, or a known, previously determined value may be used. The structure of the medium can be obtained based on three-dimensional reconstruction techniques.
After the absorption coefficient and the structural parameters of the medium are obtained, the audio attenuation parameter can be determined from them. Specifically, the structural parameters of the medium determine which portion of the original sound wave is reflected and scattered by the medium and which portion enters it; the portion entering the medium is attenuated according to the medium's absorption coefficient, yielding the portion of the sound wave transmitted through the medium. The audio attenuation parameter can then be obtained from the ratio between the transmitted sound-wave portion and the original sound.
It should be noted that, because sound often spans multiple frequency bands, the audio attenuation parameter here may also be a set of values for different frequency bands, each band having its own attenuation parameter. The attenuation parameters of the bands may differ, mainly because sound waves of different frequencies incur different losses when propagating in the medium.
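A minimal sketch of this computation, assuming an exponential absorption law, a fixed surface-reflection fraction, and illustrative per-band absorption coefficients (none of these values come from the patent):

```python
# Hedged sketch of the attenuation-parameter computation described above.
import math

ABSORPTION = {
    # material: {(band_low_hz, band_high_hz): absorption coefficient in 1/m},
    # illustrative assumed values only
    "concrete": {(0, 500): 0.02, (500, 2000): 0.03, (2000, 24000): 0.05},
    "wood":     {(0, 500): 0.08, (500, 2000): 0.10, (2000, 24000): 0.13},
}

def attenuation_params(medium_type: str, thickness_m: float,
                       surface_reflection: float = 0.3) -> dict:
    """Per-band attenuation factors (transmitted amplitude / original amplitude)."""
    entering = 1.0 - surface_reflection  # portion not reflected/scattered at the surface
    return {
        band: entering * math.exp(-coeff * thickness_m)
        for band, coeff in ABSORPTION[medium_type].items()
    }

print(attenuation_params("wood", 0.05))  # e.g. a 5 cm wooden panel in the path
```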
Then, in a case that the transmission parameter includes an audio attenuation parameter, in a possible implementation manner, the processing the first audio emitted by the virtual sound source according to the transmission parameter to obtain a second audio includes: and carrying out attenuation processing on the first audio according to the audio attenuation parameter to obtain a second audio.
After obtaining the audio attenuation parameter, the first audio may be attenuated by the audio attenuation parameter, for example, in the case that the audio attenuation parameter is a coefficient, the amplitude in the audio may be multiplied by the audio attenuation coefficient to obtain the second audio.
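The attenuation processing itself can be sketched as follows: in the broadband case the samples are simply scaled by the coefficient, while a per-band variant applies band-specific factors in the frequency domain (the frequency-domain approach is one possible realization, not mandated by the patent):

```python
# Illustrative sketch of the attenuation processing step; not from the patent.
import numpy as np

def attenuate(first_audio: np.ndarray, factor: float) -> np.ndarray:
    """Broadband case: multiply the amplitude by the attenuation coefficient."""
    return first_audio * factor

def attenuate_per_band(first_audio: np.ndarray, sr: int, band_factors: dict) -> np.ndarray:
    """Per-band case: scale each frequency band by its own attenuation factor."""
    spectrum = np.fft.rfft(first_audio)
    freqs = np.fft.rfftfreq(len(first_audio), d=1.0 / sr)
    for (lo, hi), factor in band_factors.items():
        spectrum[(freqs >= lo) & (freqs < hi)] *= factor
    return np.fft.irfft(spectrum, n=len(first_audio))
```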
In the embodiments of the present disclosure, the audio attenuation parameter is obtained from the medium's absorption coefficient for sound and its structural parameters, determining how sound is attenuated as it travels from the first position to the second position. The audio of the virtual sound source, once adjusted by the audio attenuation parameter, is thus closer to sound propagated through the real environment, making the user's experience more realistic and thereby improving it.
In addition, in a possible implementation manner, the propagation speed of sound in the medium can be determined according to the type of the medium, and then the time delay parameter is determined according to the propagation speed and the distance of the propagation path. The time delay parameter can be indicative of the length of time it takes for sound to travel from a first location to a second location, and may be, in particular, a ratio of distance to travel speed.
Then, in a possible implementation, where the transmission parameters include a time delay parameter, the method further includes: determining the time at which the virtual sound source emits the audio. The adjusting the audio emitted by the virtual sound source and playing the adjusted audio through the AR device then includes: determining a delayed playing time of the audio according to the time at which the virtual sound source emits the audio and the time delay parameter; and playing the audio through the AR device at that playing time.
The time when the virtual sound source emits the audio may be a time when a virtual sound source sound emitting operation is performed, for example, a time when a virtual animal opens a mouth to emit a sound, or a time when a virtual firework blooms.
When the time delay parameter is the length of time the sound takes to propagate from the first position to the second position, the delayed playing time of the audio is obtained by adding the time delay parameter to the time at which the virtual sound source emits the audio; the audio is then played through the AR device at that playing time.
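A small sketch of the delay computation, assuming approximate textbook propagation speeds (the table values are illustrative, not from the patent):

```python
# Sketch: delay = path distance / speed of sound in the medium,
# delayed playing time = emission time + delay.
SPEED_OF_SOUND = {"air": 343.0, "water": 1480.0, "steel": 5100.0}  # m/s, approximate

def time_delay(distance_m: float, medium_type: str = "air") -> float:
    """Time delay parameter: propagation distance divided by propagation speed."""
    return distance_m / SPEED_OF_SOUND[medium_type]

def playing_time(emit_time_s: float, distance_m: float, medium_type: str = "air") -> float:
    """Delayed playing time: emission time plus the time delay parameter."""
    return emit_time_s + time_delay(distance_m, medium_type)

# e.g. a virtual firework 340 m away in air is heard roughly 0.99 s after it blooms
print(playing_time(emit_time_s=0.0, distance_m=340.0))
```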
In the embodiments of the present disclosure, the time delay parameter is determined from the propagation speed of sound in the medium and the propagation distance, so that the audio of the virtual sound source, once adjusted by the transmission parameters, is closer to sound propagated through the real environment, making the user's experience more realistic and thereby improving it.
In one possible implementation, the method further includes: determining a first orientation of the AR device and a second orientation of the virtual sound source; determining an included angle between the first orientation and the second orientation; the determining transmission parameters for sound propagating from the first location to the second location comprises: and determining transmission parameters of sound transmitted from the first position to the second position according to the included angle.
Considering that the virtual sound source is displayed on the screen of the AR device, the first orientation of the AR device here may be the image display direction of the AR device, for example the direction the front of a phone screen faces. Alternatively, the first orientation may be the opposite of the image display direction. Since the user perceives the virtual object displayed along the image display direction of the AR device, the opposite of that direction can be regarded as the direction of the user's viewing angle; determining the transmission parameters according to this direction and adjusting the audio emitted by the virtual sound source accordingly gives a better user experience.
The orientation of the virtual sound source here may be the orientation of the part of the virtual object that emits the sound, for example the mouth of a virtual animal. When the virtual sound source emits sound with the same loudness at every angle, its orientation may be taken as facing the AR device; for example, a virtual firework emits sound with the same loudness in every direction.
The perceived loudness is mainly affected by the first orientation of the AR device. With the first orientation defined as the image display direction of the AR device, when the angle between the first orientation of the AR device and the second orientation of the virtual sound source is 0 degrees, i.e., the virtual sound source directly faces the user, the sound heard by the user can be set to its maximum; the transmission parameter may be a coefficient greater than 0 and at most 1, and the portion of the transmission parameter determined by the angle can then be set to 1. The sound heard by the user gradually decreases as the angle grows: for absolute angles between 0 and 180 degrees, loudness decreases as the angle increases, so this portion of the transmission parameter decreases with the angle; for absolute angles between 180 and 360 degrees, the source turns back toward the user and loudness increases again, so this portion of the transmission parameter increases with the angle.
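The following sketch folds the angle into [0, 180] degrees and applies a linear falloff; the linear law and the 0.1 floor are assumptions chosen for illustration, since the text only fixes the monotonic behavior and the (0, 1] range:

```python
# Illustrative orientation factor; the linear falloff and floor are assumptions.
def orientation_factor(angle_deg: float, floor: float = 0.1) -> float:
    """Loudness factor in (0, 1] based on the angle between the AR device's
    first orientation and the virtual sound source's second orientation."""
    folded = abs(angle_deg) % 360.0
    if folded > 180.0:
        folded = 360.0 - folded  # 180..360 degrees mirrors back toward the front
    return max(floor, 1.0 - folded / 180.0 * (1.0 - floor))

print(orientation_factor(0))    # 1.0 : source directly facing the user
print(orientation_factor(180))  # 0.1 : source facing away
```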
In the embodiments of the present disclosure, because the orientation of the AR device can represent the orientation of the user, the transmission parameters of the sound traveling from the first position to the second position are determined from the angle between the orientation of the AR device and the orientation of the sound source. The audio of the virtual sound source, once adjusted by these transmission parameters, is thus closer to sound propagated through the real environment, making the user's experience more realistic and thereby improving it.
In one possible implementation, the method further includes: detecting a location of the virtual sound source and/or the AR device; in response to detecting that the position of the virtual sound source and/or the AR device changes, adjusting the audio emitted by the virtual sound source based on the changed position of the virtual sound source and/or the AR device.
The detection of the position of the virtual sound source and/or the AR device may be performed in real time or at a certain frequency. When a change in the position of the virtual sound source and/or the AR device is detected, the audio emitted by the virtual sound source can be adjusted based on the changed position; for the specific adjustment, refer to the related description in the present disclosure, which is not repeated here.
In the embodiments of the present disclosure, when a change in the position of the virtual sound source and/or the AR device is detected, the audio emitted by the virtual sound source is adjusted based on the changed position, so that the virtual information played by the AR device (the adjusted audio) is better combined with the real situation, the audio played by the AR device changes as the relative position of the virtual sound source and the AR device changes, and the user's experience is more realistic and thereby improved.
The following describes in detail an audio playing method in an AR scene with reference to a possible implementation manner of the present disclosure, and the specific process of the implementation manner is as follows:
step S201, determining a first position of a virtual sound source and a second position of an AR device in an AR scene;
step S202, determining structural parameters and types of media in a propagation path of sound from a first position to a second position according to a three-dimensional structure in an AR scene;
step S203, determining an included angle between a first orientation of the AR equipment and a second orientation of the virtual sound source;
step S204, determining audio attenuation parameters of sound transmitted from a first position to a second position according to the determined included angle and the structure parameters and types of the medium;
step S205, according to the determined audio attenuation parameter, carrying out attenuation processing on a first audio frequency emitted by the virtual sound source to obtain a second audio frequency;
step S206, determining the propagation speed of sound in the medium according to the type of the medium, and determining a time delay parameter according to the propagation speed and the distance of the propagation path;
step S207, determining the playing time of the delayed playing of the second audio according to the time when the virtual sound source emits the first audio and the time delay parameter;
and step S208, playing the second audio through the AR equipment at the determined playing time.
In the embodiments of the present disclosure, the audio of the AR scene played by the AR device takes into account the positions of the virtual sound source and the AR device. The transmission parameters of the sound propagating from the virtual sound source to the AR device are determined from the medium and the three-dimensional structure between the first position and the second position, and the audio emitted by the virtual sound source is adjusted accordingly, so that the virtual information played by the AR device (the adjusted audio) is better combined with the real environment (the first and second positions), the audio played by the AR device changes as the relative position of the virtual sound source and the AR device changes, and the user's experience is more realistic and thereby improved.
It is understood that the above-mentioned embodiments of the method may be combined with each other to form a combined embodiment without departing from the logic principle, and the disclosure is not repeated herein, for example, the audio attenuation parameter may be determined according to one or more of the included angle between the first orientation and the second orientation, the type of the medium, and the structure. Those skilled in the art will appreciate that in the above methods of the specific embodiments, the specific order of execution of the steps should be determined by their function and possibly their inherent logic.
In addition, the present disclosure also provides an audio playing apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any audio playing method provided by the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding descriptions in the method section, which are not repeated here.
Fig. 2 shows a block diagram of an audio playback apparatus according to an embodiment of the present disclosure, and as shown in fig. 2, the apparatus 20 includes:
a position determining unit 21, configured to determine a first position of a virtual sound source and a second position of an AR device in an augmented reality AR scene;
a parameter determination unit 22 for determining transmission parameters of sound propagating from the first location to the second location;
and the adjusting unit 23 is configured to adjust the audio emitted by the virtual sound source according to the transmission parameter, and play the adjusted audio through the AR device.
In a possible implementation manner, the adjusting unit 23 is configured to process a first audio emitted by the virtual sound source according to the transmission parameter to obtain a second audio; playing the second audio through the AR device.
In a possible implementation manner, the transmission parameter includes an audio attenuation parameter, and the adjusting unit 23 is configured to perform attenuation processing on the first audio according to the audio attenuation parameter to obtain a second audio.
In one possible implementation, the apparatus further includes:
a time determining unit for determining a time at which the virtual sound source emits audio;
the transmission parameters include a time delay parameter, and the adjusting unit 23 is configured to determine a delayed playing time of the audio according to the time at which the virtual sound source emits the audio and the time delay parameter, and to play the audio through the AR device at that playing time.
In one possible implementation, the apparatus further includes:
a three-dimensional structure determination unit for determining a three-dimensional structure in the AR scene;
a medium determining unit, configured to determine a structural parameter and/or a type of a medium in a propagation path of the sound from the first location to the second location according to a three-dimensional structure in the AR scene;
a parameter determining unit 22 for determining a transmission parameter of the sound propagating from the first location to the second location according to a structural parameter and/or a type of the medium.
In a possible implementation manner, the parameter determining unit 22 is configured to determine an absorption coefficient of the medium for sound according to a type of the medium; and determining audio attenuation parameters according to the absorption coefficient and the structural parameters.
In a possible implementation manner, the parameter determining unit 22 is configured to determine a propagation speed of sound in the medium according to a type of the medium; and determining a time delay parameter according to the propagation speed and the distance of the propagation path.
In one possible implementation, the apparatus further includes:
an orientation determination unit for determining a first orientation of the AR device and a second orientation of the virtual sound source;
the included angle determining unit is used for determining an included angle between the first orientation and the second orientation;
the parameter determining unit 22 is configured to determine, according to the included angle, a transmission parameter of sound transmitted from the first location to the second location.
In one possible implementation, the virtual sound source includes a virtual object capable of emitting audio in the AR scene.
In one possible implementation, the apparatus further includes:
a position detection unit for detecting a position of the virtual sound source and/or the AR device;
and the audio adjusting unit is used for responding to the detection that the position of the virtual sound source and/or the AR equipment is changed, and adjusting the audio emitted by the virtual sound source based on the changed position of the virtual sound source and/or the AR equipment.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The disclosed embodiments also provide a computer program product comprising computer readable code, when the computer readable code runs on a device, a processor in the device executes instructions for implementing the audio playing method provided in any of the above embodiments.
The embodiments of the present disclosure also provide another computer program product for storing computer readable instructions, which when executed cause a computer to perform the operations of the audio playing method provided in any of the above embodiments.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 3 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like terminal.
Referring to fig. 3, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect the open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; it may also detect a change in the position of the electronic device 800 or a component thereof, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (WiFi), a second generation mobile communication technology (2G) or a third generation mobile communication technology (3G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 4 shows a block diagram of an electronic device 1900 according to an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 4, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Microsoft's server operating system (Windows Server™), Apple's graphical user interface operating system (Mac OS X™), the multi-user multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA) may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In one alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK).
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (13)

1. An audio playing method, comprising:
determining a first location of a virtual sound source and a second location of an AR device in an augmented reality (AR) scene;
determining transmission parameters for sound to travel from the first location to the second location;
adjusting the audio emitted by the virtual sound source according to the transmission parameters, and playing the adjusted audio through the AR device.
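By way of non-limiting illustration, the flow of claim 1 can be sketched in a few lines of Python. This is a minimal sketch only, assuming free-field propagation in air and simple inverse-distance attenuation; the helper names and the playback callback are hypothetical and not part of the claimed method:

```python
import math

def transmission_params(first_location, second_location):
    """Determine transmission parameters for sound travelling from
    the virtual sound source (first location) to the AR device
    (second location). Assumes propagation in air at 343 m/s."""
    distance = math.dist(first_location, second_location)
    return {
        "attenuation": 1.0 / max(distance, 1.0),  # inverse-distance gain
        "delay_s": distance / 343.0,              # propagation delay
    }

def adjust_and_play(samples, first_location, second_location, play_fn):
    """Adjust the source audio by the transmission parameters and hand
    it to the AR device's (hypothetical) playback callback."""
    params = transmission_params(first_location, second_location)
    adjusted = [s * params["attenuation"] for s in samples]
    play_fn(adjusted, start_offset_s=params["delay_s"])
```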
2. The method according to claim 1, wherein the adjusting the audio emitted by the virtual sound source according to the transmission parameters and playing the adjusted audio through the AR device comprises:
processing first audio emitted by the virtual sound source according to the transmission parameters to obtain second audio;
playing the second audio through the AR device.
3. The method of claim 2, wherein the transmission parameters include an audio attenuation parameter, and wherein the processing the first audio emitted by the virtual sound source according to the transmission parameters to obtain the second audio comprises:
performing attenuation processing on the first audio according to the audio attenuation parameter to obtain the second audio.
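A minimal sketch of the attenuation step of claim 3, assuming the audio attenuation parameter is a scalar gain in (0, 1] applied uniformly to the samples; NumPy and the float-array layout are illustrative assumptions, and a frequency-dependent filter would be a natural refinement:

```python
import numpy as np

def attenuate(first_audio: np.ndarray, attenuation_param: float) -> np.ndarray:
    """Apply the audio attenuation parameter to the first audio to
    obtain the second audio; samples are assumed to lie in [-1, 1]."""
    return np.clip(first_audio * attenuation_param, -1.0, 1.0)
```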
4. The method according to any one of claims 1-3, further comprising:
determining the moment at which the virtual sound source emits the audio;
wherein the transmission parameters include a time delay parameter, and the adjusting the audio emitted by the virtual sound source and playing the adjusted audio through the AR device comprises:
determining a playing moment for delayed playing of the audio according to the moment at which the virtual sound source emits the audio and the time delay parameter;
playing the audio through the AR device at the playing moment.
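A minimal sketch of the delayed playing of claim 4, assuming the playing moment is simply the emission moment plus the time delay parameter; the `device.schedule` call stands in for whatever scheduling API the AR device exposes and is purely hypothetical:

```python
def playing_moment(emit_moment_s: float, time_delay_s: float) -> float:
    """The moment at which the audio is played, delayed by the
    time delay parameter relative to the emission moment."""
    return emit_moment_s + time_delay_s

def schedule_playback(device, audio, emit_moment_s, time_delay_s):
    # Hypothetical device clock/scheduler; real AR audio engines
    # typically expose an equivalent timestamped-playback API.
    device.schedule(audio, at=playing_moment(emit_moment_s, time_delay_s))
```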
5. The method according to any one of claims 1-4, further comprising:
determining a three-dimensional structure in the AR scene;
determining structural parameters and/or a type of a medium in a propagation path of sound from the first location to the second location according to the three-dimensional structure in the AR scene;
the determining transmission parameters for sound propagating from the first location to the second location comprises:
determining transmission parameters for sound to travel from the first location to the second location based on the structural parameters and/or type of the medium.
6. The method of claim 5, wherein the determining transmission parameters for sound to travel from the first location to the second location based on the structural parameters and/or type of the medium comprises:
determining an absorption coefficient of the medium for sound according to the type of the medium; and determining the audio attenuation parameter according to the absorption coefficient and the structural parameters.
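A minimal sketch of claim 6 under the assumption of exponential (Beer-Lambert-style) absorption, where the structural parameter is the thickness of the medium along the propagation path; the coefficients below are illustrative placeholders, not measured values:

```python
import math

# Illustrative absorption coefficients (per metre). Real values depend
# on frequency and material and would come from an acoustics table.
ABSORPTION = {"air": 0.01, "glass": 0.5, "brick_wall": 4.0}

def audio_attenuation_param(medium_type: str, thickness_m: float) -> float:
    """Absorption coefficient from the medium type, then an audio
    attenuation parameter from the coefficient and the structural
    parameter (here: thickness of the medium along the path)."""
    alpha = ABSORPTION[medium_type]
    return math.exp(-alpha * thickness_m)  # remaining gain in (0, 1]
```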
7. The method of claim 5 or 6, wherein the determining transmission parameters for sound to travel from the first location to the second location based on the type of the medium comprises:
determining a propagation speed of sound in the medium according to the type of the medium; and determining the time delay parameter according to the propagation speed and the distance of the propagation path.
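Similarly for claim 7, a minimal sketch assuming the time delay parameter is the propagation-path distance divided by the speed of sound in the medium; the speed table is illustrative:

```python
# Illustrative propagation speeds of sound (m/s) by medium type.
SPEED_OF_SOUND = {"air": 343.0, "water": 1480.0, "glass": 5640.0}

def time_delay_param(medium_type: str, path_distance_m: float) -> float:
    """Time delay parameter from the propagation speed in the medium
    and the distance of the propagation path."""
    return path_distance_m / SPEED_OF_SOUND[medium_type]
```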
8. The method according to any one of claims 1-7, further comprising:
determining a first orientation of the AR device and a second orientation of the virtual sound source;
determining an included angle between the first orientation and the second orientation;
the determining transmission parameters for sound propagating from the first location to the second location comprises:
determining transmission parameters for sound propagating from the first location to the second location according to the included angle.
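A minimal sketch of claim 8, assuming the orientations are given as non-zero 3D direction vectors and that the included angle feeds a simple cardioid-like directivity gain; the directivity mapping is one possible choice, not prescribed by the claim:

```python
import math

def included_angle(first_orientation, second_orientation) -> float:
    """Angle (radians) between the AR device's orientation and the
    virtual sound source's orientation, via the normalized dot product.
    Both vectors are assumed non-zero."""
    dot = sum(a * b for a, b in zip(first_orientation, second_orientation))
    na = math.sqrt(sum(a * a for a in first_orientation))
    nb = math.sqrt(sum(b * b for b in second_orientation))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def directional_gain(angle_rad: float) -> float:
    """One possible angle-to-transmission-parameter mapping: a
    cardioid directivity pattern (full gain on-axis, none behind)."""
    return 0.5 * (1.0 + math.cos(angle_rad))
```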
9. The method of claim 1, wherein the virtual sound source comprises a virtual object in the AR scene that is capable of emitting audio.
10. The method according to any one of claims 1-9, further comprising:
detecting a location of the virtual sound source and/or the AR device;
in response to detecting that the location of the virtual sound source and/or the AR device changes, adjusting the audio emitted by the virtual sound source based on the changed location of the virtual sound source and/or the AR device.
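A minimal sketch of the re-adjustment loop of claim 10, assuming a simple polling model; `get_locations` and `readjust` are hypothetical callbacks standing in for the AR engine's tracking and audio pipelines:

```python
import time

def update_loop(source, device, get_locations, readjust, poll_s=0.05):
    """Poll the locations of the virtual sound source and the AR
    device; when either changes, re-derive the transmission
    parameters and re-adjust the audio."""
    last = None
    while True:
        current = get_locations(source, device)  # (source_loc, device_loc)
        if current != last:
            readjust(source, device, current)
            last = current
        time.sleep(poll_s)
```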
11. An audio playback apparatus, comprising:
a location determining unit, configured to determine a first location of a virtual sound source and a second location of an AR device in an augmented reality (AR) scene;
a parameter determining unit, configured to determine transmission parameters for sound propagating from the first location to the second location; and
an adjusting unit, configured to adjust the audio emitted by the virtual sound source according to the transmission parameters and play the adjusted audio through the AR device.
12. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any one of claims 1 to 10.
13. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 10.
CN202110113161.1A 2021-01-27 2021-01-27 Audio playing method and device, electronic equipment and storage medium Pending CN112882568A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110113161.1A CN112882568A (en) 2021-01-27 2021-01-27 Audio playing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112882568A (en) 2021-06-01

Family

ID=76052808

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110113161.1A Pending CN112882568A (en) 2021-01-27 2021-01-27 Audio playing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112882568A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130236040A1 (en) * 2012-03-08 2013-09-12 Disney Enterprises, Inc. Augmented reality (ar) audio with position and action triggered virtual sound effects
CN107193386A (en) * 2017-06-29 2017-09-22 联想(北京)有限公司 Acoustic signal processing method and electronic equipment
CN111713091A (en) * 2018-02-15 2020-09-25 奇跃公司 Mixed reality virtual reverberation
CN109086029A (en) * 2018-08-01 2018-12-25 北京奇艺世纪科技有限公司 A kind of audio frequency playing method and VR equipment
CN111158459A (en) * 2018-11-07 2020-05-15 辉达公司 Application of geometric acoustics in immersive Virtual Reality (VR)
CN110401898A (en) * 2019-07-18 2019-11-01 广州酷狗计算机科技有限公司 Export method, apparatus, equipment and the storage medium of audio data

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114020235A (en) * 2021-09-29 2022-02-08 北京城市网邻信息技术有限公司 Audio processing method in real scene space, electronic terminal and storage medium
CN114020235B (en) * 2021-09-29 2022-06-17 北京城市网邻信息技术有限公司 Audio processing method in live-action space, electronic terminal and storage medium
CN114286278A (en) * 2021-12-27 2022-04-05 北京百度网讯科技有限公司 Audio data processing method and device, electronic equipment and storage medium
CN114286278B (en) * 2021-12-27 2024-03-15 北京百度网讯科技有限公司 Audio data processing method and device, electronic equipment and storage medium
WO2024001884A1 (en) * 2022-06-29 2024-01-04 深圳市中兴微电子技术有限公司 Road condition prompting method, and electronic device and computer-readable medium
CN116437282A (en) * 2023-03-23 2023-07-14 合众新能源汽车股份有限公司 Sound sensation processing method of virtual concert, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN112882568A (en) Audio playing method and device, electronic equipment and storage medium
US8170222B2 (en) Augmented reality enhanced audio
CN110198484B (en) Message pushing method, device and equipment
CN111445901B (en) Audio data acquisition method and device, electronic equipment and storage medium
CN110401898B (en) Method, apparatus, device and storage medium for outputting audio data
WO2022188305A1 (en) Information presentation method and apparatus, and electronic device, storage medium and computer program
CN111246227A (en) Bullet screen publishing method and equipment
CN107147936B (en) Display control method and device for barrage
CN111563138B (en) Positioning method and device, electronic equipment and storage medium
WO2022134475A1 (en) Point cloud map construction method and apparatus, electronic device, storage medium and program
WO2020062922A1 (en) Sound effect processing method and related product
CN110989901A (en) Interactive display method and device for image positioning, electronic equipment and storage medium
CN108174269B (en) Visual audio playing method and device
CN112785672A (en) Image processing method and device, electronic equipment and storage medium
CN111010314A (en) Communication test method and device for terminal equipment, routing equipment and storage medium
CN110798327B (en) Message processing method, device and storage medium
CN112950712B (en) Positioning method and device, electronic equipment and storage medium
CN111784773A (en) Image processing method and device and neural network training method and device
CN113835518A (en) Vibration control method and device, vibration device, terminal and storage medium
CN110660403B (en) Audio data processing method, device, equipment and readable storage medium
WO2022193467A1 (en) Sound playing method and apparatus, electronic device, storage medium, and program
CN106598247B (en) Response control method and device based on virtual reality
CN113157097B (en) Sound playing method and device, electronic equipment and storage medium
CN110769311A (en) Method, device and system for processing live data stream
CN112148130A (en) Information processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20210601