CN116437282A - Sound sensation processing method of virtual concert, storage medium and electronic equipment - Google Patents


Info

Publication number
CN116437282A
CN116437282A (application number CN202310296647.2A)
Authority
CN
China
Prior art keywords
audio
audience
sound
determining
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310296647.2A
Other languages
Chinese (zh)
Inventor
陈立里
霍百林
张祺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hozon New Energy Automobile Co Ltd
Original Assignee
Hozon New Energy Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hozon New Energy Automobile Co Ltd filed Critical Hozon New Energy Automobile Co Ltd
Priority to CN202310296647.2A
Publication of CN116437282A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04S: STEREOPHONIC SYSTEMS
    • H04S7/00: Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30: Control circuits for electronic adaptation of the sound field
    • H04S7/302: Electronic adaptation of stereophonic sound system to listener position or orientation

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Stereophonic System (AREA)

Abstract

The invention discloses a sound sensation processing method for a virtual concert, a storage medium, and electronic equipment. The method comprises: receiving a selection instruction, and determining a target position in the virtual concert according to the selection instruction; determining first audience transmission audio transmitted to the target position; acquiring the sound position of the sound box in the virtual concert, determining the sound audio, and determining, according to the target position and the sound position, the sound transmission audio transmitted from the sound audio to the target position; and mixing the first audience transmission audio and the sound transmission audio to obtain the mixed audio transmitted to the target position. By determining, according to the target position selected by the user in the virtual concert, the first audience transmission audio and the sound transmission audio transmitted to the target position and mixing the two, a user at the target position can feel the atmosphere of the corresponding position in a real concert, as if personally on the scene.

Description

Sound sensation processing method of virtual concert, storage medium and electronic equipment
Technical Field
The invention relates to the technical field of sound sensation processing for virtual concerts, and in particular to a sound sensation processing method for a virtual concert, a storage medium, and electronic equipment.
Background
With the development of virtual reality, virtual scenes such as virtual concerts and virtual ball games have emerged. Related virtual concert sound sensation processing schemes lack realism and offer poor interactivity, so users in the virtual scene cannot experience the atmosphere of a real concert.
Disclosure of Invention
The present invention aims to solve, at least to some extent, one of the technical problems in the related art. Therefore, an object of the present invention is to provide a sound sensation processing method for a virtual concert that increases interactivity, improves the realism of the concert, and enables a user to experience the atmosphere of the corresponding position in a real concert, as if personally on the scene.
A second object of the present invention is to propose a computer readable storage medium.
A third object of the present invention is to propose an electronic device.
In order to achieve the above object, an embodiment of a first aspect of the present invention provides a sound sensation processing method for a virtual concert, the method comprising: receiving a selection instruction, and determining a target position in a virtual concert according to the selection instruction; determining first audience transmission audio transmitted to the target position; acquiring the sound position of the sound box in the virtual concert, determining the sound audio, and determining, according to the target position and the sound position, the sound transmission audio transmitted from the sound audio to the target position; and mixing the first audience transmission audio and the sound transmission audio to obtain the mixed audio transmitted to the target position.
In addition, the sound sensation processing method of the virtual concert according to the embodiment of the present invention may further have the following additional technical features:
according to one embodiment of the invention, determining a first audience transmission audio to transmit to the target location comprises: determining a first audience within a first preset range of the target position; acquiring a first position of each of the first audience and first audience audio acquired from a microphone of each of the first audience; determining, from the target location and each of the first locations, first audience-transmitted audio that each of the first audience audio transmitted to the target location; and mixing the audio transmitted by each first audience to obtain the audio transmitted by the first audience.
According to one embodiment of the invention, determining the sound audio comprises: acquiring the singer position of a singer in the virtual concert and the headset audio of the singer; determining second audiences within a second preset range of the singer position; acquiring a second position of each second audience and second audience audio collected from each second audience's microphone; determining, according to the singer position and each second position, the second audience propagation audio of each second audience audio transmitted to the singer position; and mixing the headset audio and the second audience propagation audio to obtain the sound audio.
According to one embodiment of the present invention, determining second audience propagation audio for each of the second audience audio to the singer location based on the singer location and each of the second locations comprises: calculating a distance between each second audience and the singer according to the singer position and each second position; performing attenuation processing on the corresponding second audience audio according to each distance to obtain corresponding second audience attenuation audio; and mixing all the attenuated audios of the second audience to obtain the propagation audios of the second audience.
According to one embodiment of the present invention, the attenuating process is performed on each corresponding second audience audio according to each of the distances, and further includes: acquiring the frequency of each second audience audio; and attenuating the corresponding second audience audio according to each distance and each frequency.
According to one embodiment of the present invention, mixing all the second audience attenuated audio to obtain second audience propagation audio includes: calculating an audio delay time between each of the second viewers and the target singer according to each of the distances; performing delay processing on the attenuated audio of each second audience according to each audio delay time to obtain each second audience delay audio; and mixing the delayed audio of each second audience to obtain the second audience propagation audio.
According to one embodiment of the invention, the target location comprises a target head orientation and the first location comprises a first head orientation; determining first audience propagation audio for each of the first audience audio to propagate to the target location based on the target location and a first location of each of the first audience, comprising: determining a direction included angle between each first position and the target position according to the target head orientation and each first head orientation; and determining the first audience propagation audio when each first audience audio propagates to the target position according to all the direction included angles.
According to one embodiment of the present invention, a selection instruction is received, and before determining the target position in the virtual concert according to the selection instruction, the method further comprises: a network delay condition is obtained and a location range is provided based on the network delay condition.
To achieve the above object, an embodiment of a second aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a sound sensation processing method for a virtual concert as set forth in the embodiment of the first aspect of the present invention.
To achieve the above objective, an embodiment of a third aspect of the present invention provides an electronic device including a memory and a processor, where the memory stores a computer program, and when the computer program is executed by the processor, the sound sensation processing method for a virtual concert according to the embodiment of the first aspect of the present invention is implemented.
According to the sound sensation processing method of the virtual concert of the embodiment of the invention, the first audience transmission audio transmitted to the target position by the first audiences around the target position and the sound transmission audio transmitted to the target position by the sound box in the virtual concert are obtained and mixed. This increases interactivity and improves the realism of the concert, so that a user at the target position feels the atmosphere of the corresponding position in a real concert, as if personally on the scene.
According to the sound sensation processing method of the virtual concert, the first audience audio acquired by the microphone of the first audience is attenuated according to the propagation distance and the head direction, so that the first audience transmission audio transmitted to the target position is obtained, interactivity with the first audience is improved, and the reality of the concert is improved.
According to the sound sensation processing method of the virtual concert, the headset audio of the singer and the second audience audio of the second audiences around the singer are collected; the second audience audio is attenuated according to its propagation distance, propagation delay, and frequency to determine the second audience propagation audio transmitted to the singer; and the second audience propagation audio is mixed with the headset audio to determine the audio played by the sound box. This increases interactivity between the singer and the surrounding second audiences and improves the realism of the concert.
According to the sound sensation processing method of the virtual concert, an optional position range is provided for a user, a target position is determined by the user, and an audio effect at the corresponding position is generated according to the selected target position, so that the user experiences atmosphere at the corresponding position of the real concert, and the user is in the scene.
Drawings
FIG. 1 is a flow chart of a method of processing the sound sensation of a virtual concert according to one embodiment of the present invention;
FIG. 2 illustrates a schematic diagram of singer, stage, virtual audience positions in accordance with an embodiment of the present invention;
FIG. 3 is a flow chart of determining that a first audience is transmitting audio in accordance with an embodiment of the present invention;
FIG. 4 is a flow chart of determining the sound audio in accordance with one embodiment of the present invention;
FIG. 5 is a flow chart of determining the second audience-transmitted audio in accordance with one embodiment of the present invention;
FIG. 6 is a flow chart of determining the second audience-transmitted audio in accordance with another embodiment of the present invention;
FIG. 7 is a schematic diagram of the different audio attenuation levels of one embodiment of the present invention;
FIG. 8 is a flow chart of a second audience propagation audio in accordance with yet another embodiment of the present invention;
FIG. 9 is a flow chart of determining that a first audience is transmitting audio in accordance with another embodiment of the present invention;
FIG. 10 is a schematic illustration of the effect of different head orientations on sound propagation in accordance with one embodiment of the present invention;
fig. 11 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present invention and should not be construed as limiting the invention.
The following describes in detail a sound sensation processing method, a storage medium, and an electronic device of a virtual concert according to an embodiment of the present invention with reference to fig. 1 to 11 of the specification and a specific implementation manner.
Fig. 1 is a flowchart of a sound sensation processing method of a virtual concert according to an embodiment of the present invention. FIG. 2 shows a schematic diagram of singer, stage, virtual audience positions in accordance with an embodiment of the present invention. As shown in fig. 1, the sound sensation processing method of the virtual concert may include:
s1, receiving a selection instruction, and determining a target position in the virtual concert according to the selection instruction.
Specifically, when experiencing the virtual concert, the user can issue a selection instruction through the virtual concert system to select his or her position in the virtual concert. When the target user enters the virtual concert, the virtual concert system provides a position range to the target user and determines the target user's target position in the virtual concert according to the selection instruction issued by the target user.
S2, determining the first audience transmission audio transmitted to the target position.
To simulate the real sound effect of a concert more faithfully, the embodiment of the invention acquires not only the sound-box sound transmitted to the target position, which the target audience can hear, but also the sound emitted by audiences around the target position.
According to the embodiment of the invention, the audio emitted by the audiences around the target position is acquired and processed, and the first audience transmission audio transmitted to the target position is determined.
And S3, acquiring the sound position of the sound box in the virtual concert, determining the sound audio, and determining, according to the target position and the sound position, the sound transmission audio transmitted to the target position.
Specifically, the sound audio played by the sound box is determined (the singer's headset audio is played through the sound box), the sound audio is processed according to the target position and the sound position, and the sound transmission audio transmitted to the target position is determined.
S4, mixing the first audience transmission audio and the sound transmission audio to obtain mixed audio transmitted to the target position.
According to the embodiment of the invention, the first audience transmission audio and the sound transmission audio transmitted to the target position are mixed, the mixed audio transmitted to the target position is determined, and the mixed audio is played, so that the audience can experience the concert without leaving the home, and the user is in the scene.
In the embodiment of the present invention, the weight w1 of the first audience transmission audio and the weight w2 of the audio transmission audio may be adaptively set by the virtual concert system. The target audience can also set the weight w1 of the first audience transmission audio and the weight w2 of the sound transmission audio according to own requirements.
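The weighted mixing of step S4 can be sketched as follows. This is an illustrative sketch only, not the patented implementation: the function name, the per-sample list representation, and the normalization by w1 + w2 are all assumptions.

```python
def mix_audio(first_audience, sound, w1=0.5, w2=0.5):
    """Sample-wise weighted mix of the first audience transmission audio and
    the sound transmission audio, using weights w1 and w2."""
    if len(first_audience) != len(sound):
        raise ValueError("streams must be the same length")
    total = w1 + w2
    # Dividing by w1 + w2 keeps the mix in range regardless of the chosen
    # weights; this normalization step is an assumption, not from the patent.
    return [(w1 * a + w2 * s) / total for a, s in zip(first_audience, sound)]

# With equal weights, each output sample is the average of the two streams.
mixed = mix_audio([0.2, 0.4], [0.6, 0.0], w1=1.0, w2=1.0)
```

Either the system or the target audience may supply w1 and w2, matching the two configuration options described above.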
In one embodiment of the invention, as shown in FIG. 3, determining the first audience transmission audio to transmit to the target location may comprise:
s21, determining a first audience within a first preset range of the target position;
s22, acquiring a first position of each first audience and first audience audio acquired from a microphone of each first audience;
s23, determining each first audience audio transmitted to the target position according to the target position and each first position;
s24, mixing the audio transmitted by each first audience to obtain the audio transmitted by the first audience.
Because the sound is attenuated during the transmission, the sound emitted by the audience farther from the target location may not be transmitted to the target location, and the sound emitted by the audience closer to the target location, while being transmitted to the target location, may be attenuated accordingly. To further simulate a real concert environment, first audience audio of each first audience within a first preset range of the target position is attenuated.
Specifically, the first preset range may be determined according to the audio decibel propagation attenuation, and the first audiences within the first preset range of the target position are determined according to the target position. The first position of each first audience is obtained, and the first audience audio of each first audience is collected from each first audience's microphone. Each first audience audio is attenuated according to the target position and each first position to determine each first audience propagation audio (the attenuated first audience audio) transmitted to the target position, and all the first audience propagation audio is mixed to obtain the first audience transmission audio.
As a specific example, when the first audience audio is attenuated according to the target position and the first positions, the attenuation may be performed with reference to GB/T 17247.1-2000 (acoustics: attenuation of sound during propagation outdoors).
As another specific example, when the first audience audio is attenuated according to the target position and the first positions, the first audience audio may be attenuated according to the following propagation attenuation formula of the audio.
Propagation attenuation formula of the audio:

ΔL = 20·lg(r/r₀) + 11 (dB)

where ΔL represents the decibel attenuation value, r represents the distance between the target position and the corresponding first position, and r₀ is the 1 m reference distance.
By using the formula, the attenuation value of each first audience audio is calculated, and the first audience transmission audio can be determined according to each first audience audio and the corresponding attenuation value.
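Assuming the attenuation formula is the point-source geometric-divergence term of the GB/T 17247.1 standard cited above (an assumption, since the original formula image did not survive extraction), the calculation can be sketched as:

```python
import math

def geometric_divergence_db(r, r0=1.0):
    """Geometric divergence attenuation of a point source:
    delta_L = 20*lg(r/r0) + 11 dB, with r0 = 1 m as the reference distance."""
    return 20.0 * math.log10(r / r0) + 11.0

def attenuated_level_db(source_db, r):
    """Decibel level of a first audience voice after travelling r metres."""
    return source_db - geometric_divergence_db(r)

# A 70 dB voice heard from 10 m away: 70 - (20*1 + 11) = 39 dB.
level = attenuated_level_db(70.0, 10.0)
```

Applying this per first audience member, then mixing, matches steps S23 and S24 above.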
Since it takes time for sound to travel from its source to the target position, sound from different positions reaches the target position with some delay. To more realistically simulate the sound of the surrounding (first) audience heard at the target position, the embodiment of the invention may, in addition to attenuating the decibels of the first audience audio, also delay the first audience audio, so as to obtain more realistic first audience transmission audio.
It should be noted that, in addition to acquiring the audio of first audiences within the first preset range of the target position, the audio of other audiences outside the first preset range may also be acquired. Specifically, it is judged whether the audio decibels of those other audiences can still reach the target position; the audio of other audiences that can reach the target position is attenuated and delayed and then mixed with the first audience propagation audio to obtain the first audience transmission audio.
In one embodiment of the invention, as shown in FIG. 4, determining the sound audio may include:
s31, obtaining the singer position of a singer in the virtual concert and the headset audio of the singer;
s32, determining a second audience within a second preset range of the singer position;
s33, acquiring a second position of each second audience and second audience audio acquired from microphones of each second audience;
s34, determining second audience transmission audio transmitted to the singer position by each second audience audio according to the singer position and each second position;
s35, mixing the headset audio and all second audience audio to obtain sound audio.
The singer's headset microphone picks up, in addition to the audio emitted by the singer, the audio emitted by the audiences around the singer. The embodiment of the invention mixes the singer's headset audio with the audio emitted by the surrounding audiences to determine the audio played by the sound box, placing the audience in a realistic concert.
Specifically, the singer position of the target singer in the virtual concert and the headset audio of the target singer are obtained, and the second audiences within a second preset range of the singer position are determined according to the singer position, where the second preset range may be determined according to the audio decibel propagation attenuation. The second position of each second audience is acquired, along with the second audience audio collected from each second audience's microphone. Because the second audience audio emitted by each second audience is attenuated as it travels to the singer position, each second audience audio is attenuated according to the singer position and each second position to determine the second audience propagation audio transmitted to the singer position. The headset audio and all the second audience propagation audio are then mixed to obtain the sound audio.
In one embodiment of the present invention, as shown in fig. 5, determining, according to the singer position and each second position, the second audience propagation audio transmitted to the singer position comprises:
s36, calculating the distance between each second audience and the singer according to the singer position and each second position;
s37, carrying out attenuation processing on the corresponding second audience audio according to each distance to obtain corresponding second audience attenuation audio;
s38, mixing all the attenuated audios of the second audience to obtain the propagation audios of the second audience.
Specifically, when determining the second audience propagation audio, the distance between each second audience and the singer is calculated from the acquired singer position and each second position. The propagation attenuation method described above may be used, for example the propagation attenuation formula:

ΔL = 20·lg(r/r₀) + 11 (dB)

to calculate the attenuation value of each second audience audio, and the corresponding second audience attenuated audio is determined from each second audience audio and its attenuation value. All the second audience attenuated audio is then mixed to obtain the second audience propagation audio.
In one embodiment of the present invention, as shown in fig. 6, the attenuating process is performed on each corresponding second audience audio according to each distance, and further includes:
s371, obtaining the frequency of each second audience audio;
and S372, performing attenuation processing on the corresponding second audience audio according to each distance and each frequency.
Because high frequencies attenuate quickly and low frequencies attenuate slowly, to further give the audience the experience of a real concert, the frequency of each second audience audio is obtained and the corresponding second audience audio is attenuated according to each distance and each frequency. Fig. 7 shows the attenuation levels of different audio frequencies.
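The patent does not give the frequency-dependent attenuation law it uses. A minimal sketch, assuming a simple air-absorption model in which the absorption coefficient grows with the square of frequency (the 5 dB/km coefficient at 1 kHz is a placeholder, not from the patent), could look like:

```python
def air_absorption_db(distance_m, freq_hz, alpha_1k_db_per_km=5.0):
    """Extra attenuation from air absorption over distance_m metres.
    Assumes absorption scales with frequency squared relative to 1 kHz;
    alpha_1k_db_per_km is a placeholder coefficient."""
    alpha = alpha_1k_db_per_km * (freq_hz / 1000.0) ** 2  # dB per km at freq_hz
    return alpha * distance_m / 1000.0

# Doubling the frequency quadruples the absorption over the same distance,
# reproducing the "high frequencies attenuate faster" behaviour.
low = air_absorption_db(1000.0, 1000.0)
high = air_absorption_db(1000.0, 2000.0)
```

This frequency-dependent term would be added on top of the distance attenuation computed earlier.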
In one embodiment of the present invention, as shown in fig. 8, mixing all the second audience attenuated audio to obtain the second audience transmitted audio may include:
s381, calculating the audio delay time between each second audience and the target singer according to each distance;
s382, carrying out delay processing on the attenuated audio of each second audience according to each audio delay time to obtain each second audience delay audio;
s383, mixing the delayed audio of each second audience to obtain the propagation audio of the second audience.
Since a concert is a relatively large venue, sound takes time to propagate from its source to the singer, so the second audience audio of each second audience reaches the singer position with a delay. Assume the propagation speed of sound is 346 m/s at a room temperature of 25 °C. Then the sound made by an audience member at (X3, Y3) reaches the target singer at (X4, Y4) after

t = √((X3 − X4)² + (Y3 − Y4)²) / 346 (s)
Therefore, the second audience audio can be further subjected to delay processing while being subjected to attenuation processing, so that the user can further experience a real concert.
Specifically, the audio delay time between each second audience and the target singer, that is, the time required for the second audience audio collected from each second audience to reach the target singer, is calculated from the distance between each second audience and the target singer. Each second audience attenuated audio is then delayed according to its audio delay time to obtain each second audience delayed audio, and all the second audience delayed audio is mixed to obtain the second audience propagation audio.
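Steps S381 and S382 can be sketched as follows, using the 346 m/s figure from the description. The 48 kHz sample rate and the integer-sample delay implemented by prepending silence are assumptions for illustration.

```python
import math

SPEED_OF_SOUND = 346.0  # m/s at roughly 25 degrees C, per the description

def audio_delay_seconds(audience_pos, singer_pos):
    """Time for a second audience member's sound to reach the singer (S381)."""
    (x3, y3), (x4, y4) = audience_pos, singer_pos
    return math.hypot(x3 - x4, y3 - y4) / SPEED_OF_SOUND

def apply_delay(samples, delay_s, sample_rate=48000):
    """Delay a sample stream by prepending silence (S382); rounds the delay
    to a whole number of samples at the assumed sample rate."""
    pad = round(delay_s * sample_rate)
    return [0.0] * pad + list(samples)

# An audience member 346 m from the singer is heard 1 s late.
d = audio_delay_seconds((0.0, 0.0), (346.0, 0.0))
delayed = apply_delay([1.0], d, sample_rate=2)
```

The delayed streams from all second audiences are then mixed, per step S383.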
In view of network delay, when determining each viewer's position, high-delay users are placed toward the back rows and low-delay users toward the front rows. Assuming a network delay of t_s seconds, this delay is subtracted when calculating the time for the second audience audio to reach the singer position, i.e.:

t' = √((X3 − X4)² + (Y3 − Y4)²) / 346 − t_s

This requires that

√((X3 − X4)² + (Y3 − Y4)²) / 346 > t_s

that is, the time for the second audience audio to travel from the second audience to the singer must be greater than the network delay. Therefore, when users enter the virtual concert, different users are offered different position choices according to their average delay.
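The constraint above, that in-scene propagation time must exceed the network delay, suggests a simple admissibility check when offering seat positions. The function name and the boolean-check formulation here are assumptions:

```python
import math

SPEED_OF_SOUND = 346.0  # m/s

def position_allowed(audience_pos, singer_pos, network_delay_s):
    """A seat may be offered only if the in-scene propagation time of the
    viewer's audio to the singer exceeds the viewer's network delay."""
    (x3, y3), (x4, y4) = audience_pos, singer_pos
    travel = math.hypot(x3 - x4, y3 - y4) / SPEED_OF_SOUND
    return travel > network_delay_s

# 346 m from the singer (1 s of travel) is admissible for a 0.5 s delay;
# 34.6 m (0.1 s of travel) is not, so that seat would not be offered.
ok = position_allowed((0.0, 0.0), (346.0, 0.0), 0.5)
too_close = position_allowed((0.0, 0.0), (34.6, 0.0), 0.5)
```

Filtering candidate seats with such a check yields the per-user position ranges described next.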
In one embodiment of the present invention, the method for processing the sound sensation of the virtual concert may further include:
the network delay condition is obtained and a location range is provided based on the network delay condition.
Specifically, before receiving the selection instruction issued by the user, the virtual concert system acquires the user's network delay condition and provides a position range accordingly. If the user's network delay condition is good, i.e. the delay is below a preset threshold, the provided position range is close to the singer. If the network delay condition is poor, i.e. the delay exceeds the preset threshold, the provided position range is far from the singer. The user can then issue a selection instruction within the provided position range.
In one embodiment of the present invention, determining, according to the target position and the sound position, the sound transmission audio transmitted to the target position comprises:
calculating the distance between the target position and the sound position according to the target position and the sound position;
and attenuating the sound audio according to the distance between the target position and the sound position to obtain the sound transmission audio transmitted to the target position.
It should be noted that the number of sound boxes in the embodiment of the invention may be one or multiple (two or more).
When determining the sound transmission audio transmitted to the target position, the distance between the target position and each sound position is calculated, each sound audio is attenuated according to that distance to obtain the sound transmission audio transmitted to the target position, and each sound transmission audio is then mixed with the first audience transmission audio.
In an embodiment of the present invention, attenuating the sound audio according to the distance between the target position and each sound position includes attenuating the decibels of the sound audio, attenuating the frequencies of the sound audio, and delaying each sound audio according to that distance.
It should be noted that the principles of attenuating the decibels of the sound audio, attenuating its frequencies, and delaying each sound audio are the same as those used for the second audience audio, and are not repeated here.
In the embodiment of the invention, the sound made by the target user may also be picked up by the target user's microphone and mixed with the first audience transmission audio and the sound transmission audio, improving the interaction among the user, the first audience, the second audience, and the singer.
In one embodiment of the invention, as shown in FIG. 9, the target location comprises a target head orientation, and the first location comprises a first head orientation; determining first audience propagation audio for each first audience audio to propagate to the target location based on the target location and the first location of each first audience, comprising:
S231, determining a direction included angle between each first position and the target position according to the target head orientation and each first head orientation;
S232, determining the first audience transmission audio when each first audience audio is transmitted to the target position according to all the direction included angles.
Referring to fig. 10, consider two users A and B, that is, viewer A and viewer B in fig. 10. If viewer B speaks while facing viewer A, the heard decibel level is relatively high; if viewer B speaks while facing away from viewer A, the decibel level is reduced. The direction therefore needs to be considered to further determine the first audience transmission audio when each first audience audio is transmitted to the target position. When a user wears VR glasses or a headset equipped with a gyroscope, the head orientation of the user relative to the target viewing device (such as a mobile phone, a computer, or a television) can be obtained. That is, when determining the target position of the target user and the first position of each first audience in the virtual concert, the head orientations of the target user and the first audience are also acquired as spatial coordinates (X, Y, Z), where (X, Y) represents the position coordinates in the virtual concert and Z represents the head orientation.
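As a rough illustration of how the (X, Y, Z) encoding can yield a direction included angle between a speaker's head orientation and the direction toward a listener (the exact angle convention is an assumption, as the description does not fix one):

```python
import math

def direction_angle(speaker_pos, speaker_heading_deg, listener_pos):
    """Angle in degrees (0..180) between the speaker's head orientation and
    the direction from speaker to listener; 0 means speaking straight at the
    listener. Positions are (X, Y); Z is carried as the heading in degrees."""
    dx = listener_pos[0] - speaker_pos[0]
    dy = listener_pos[1] - speaker_pos[1]
    to_listener = math.degrees(math.atan2(dy, dx))
    diff = abs(speaker_heading_deg - to_listener) % 360.0
    return min(diff, 360.0 - diff)

# Viewer B at the origin facing +X (heading 0); viewer A straight ahead.
print(direction_angle((0.0, 0.0), 0.0, (5.0, 0.0)))    # face-to-face
print(direction_angle((0.0, 0.0), 180.0, (5.0, 0.0)))  # facing away
```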
The direction included angle between each first position and the target position can be determined according to the target head orientation and each first head orientation, and an attenuation threshold U can be preset to simulate this situation: the decibel level of the first audience audio heard by the target user equals U × β, where β is the decibel level of the first audience audio. The value of U may be set according to the angle between the target user and the first audience (other users within the first preset range of the target position). For example, with face-to-face defined as 0 degrees, U takes the value U1 for angles from -45 to 45 degrees, U2 for angles from 45 to 90 degrees, and so on.
Each first audience audio is then attenuated according to both the distance between the target position and the corresponding first position and the corresponding direction included angle, thereby determining the first audience transmission audio when it is transmitted to the target position.
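Combining the direction included angle with the distance, a sketch of the heard level U × β might look like the following. The concrete U values and the distance falloff are illustrative assumptions, not values specified by the embodiment.

```python
import math

def angle_gain(angle_deg):
    """Attenuation threshold U chosen piecewise by direction included angle
    (0 degrees = face-to-face, per the example above); the concrete U values
    are illustrative assumptions."""
    if angle_deg <= 45.0:
        return 1.0    # U1: roughly face-to-face
    if angle_deg <= 90.0:
        return 0.7    # U2
    if angle_deg <= 135.0:
        return 0.4
    return 0.2        # speaking almost directly away

def heard_decibels(beta_db, distance, angle_deg):
    """Heard level U * beta, combined with a simple logarithmic distance
    falloff (the combination rule itself is an assumption)."""
    u = angle_gain(angle_deg)
    return u * beta_db - 20.0 * math.log10(max(distance, 1.0))

print(heard_decibels(60.0, 1.0, 30.0))   # face-to-face at close range
print(heard_decibels(60.0, 1.0, 170.0))  # facing away: much quieter
```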
According to the sound sensation processing method of the virtual concert, the first audience transmission audio and the sound transmission audio transmitted to the target position are determined according to the target position selected by the user in the virtual concert and are mixed, so that the user at the target position experiences the atmosphere of the corresponding position in a real concert and feels personally present at the scene.
The invention provides a computer readable storage medium.
In this embodiment, a computer program is stored on the computer readable storage medium; when the computer program is executed by a processor, the sound sensation processing method of the virtual concert described above is implemented.
The invention provides an electronic device.
In this embodiment, the electronic device 500 includes a memory 503 and a processor 501, where the memory 503 stores a computer program, and when the computer program is executed by the processor 501, the above-mentioned sound sensation processing method of the virtual concert is implemented.
Fig. 11 is a block diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 11, the electronic device 500 includes a processor 501 and a memory 503, with the processor 501 coupled to the memory 503, for example via a bus 502. Optionally, the electronic device 500 may also include a transceiver 504. It should be noted that, in practical applications, the number of transceivers 504 is not limited to one, and the structure of the electronic device 500 does not constitute a limitation on the embodiments of the present invention.
The processor 501 may be a CPU (Central Processing Unit), a general purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or perform the various exemplary logical blocks, modules, and circuits described in connection with the present disclosure. The processor 501 may also be a combination that implements computing functionality, such as a combination comprising one or more microprocessors, or a combination of a DSP and a microprocessor.
Bus 502 may include a path to transfer information between the components. Bus 502 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in FIG. 11, but this does not mean there is only one bus or only one type of bus.
The memory 503 is used to store the computer program corresponding to the sound sensation processing method of the virtual concert according to the above-described embodiments of the present invention, and the processor 501 is configured to execute the computer program stored in the memory 503 to implement what is shown in the foregoing method embodiments.
The electronic device 500 includes, but is not limited to: mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and in-vehicle terminals (e.g., in-vehicle navigation terminals), as well as stationary terminals such as digital TVs and desktop computers. The electronic device 500 shown in fig. 11 is merely an example and should not be construed as limiting the functionality and scope of use of embodiments of the present invention.
By using the sound sensation processing method of the virtual concert, the computer readable storage medium and the electronic device of the embodiments of the invention determine the first audience transmission audio and the sound transmission audio transmitted to the target position, mix them to obtain the mixed audio transmitted to the target position, and play the mixed audio, so that the audience can experience the concert without leaving home and feel personally present at the scene.
It should be noted that the logic and/or steps represented in the flowcharts or otherwise described herein may be considered an ordered listing of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or a combination of the following techniques well known in the art may be used: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field programmable gate arrays (FPGAs), and the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
In the description of the present invention, it should be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", "axial", "radial", "circumferential", etc. indicate orientations or positional relationships based on those shown in the drawings, are merely for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the device or element referred to must have a specific orientation or be configured and operated in a specific orientation; they should therefore not be construed as limiting the present invention.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present invention, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
In the present invention, unless explicitly specified and limited otherwise, the terms "mounted," "connected," "secured," and the like are to be construed broadly; for example, a connection may be a fixed connection, a detachable connection, or an integral formation; it may be a mechanical connection or an electrical connection; it may be a direct connection or an indirect connection through an intermediary; and it may denote internal communication between two elements or an interaction between two elements. The specific meanings of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
In the present invention, unless expressly stated or limited otherwise, a first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or that they are in indirect contact via an intervening medium. Moreover, a first feature being "above," "over," or "on" a second feature may mean that the first feature is directly above or obliquely above the second feature, or simply that the first feature is at a higher level than the second feature. A first feature being "under," "below," or "beneath" a second feature may mean that the first feature is directly below or obliquely below the second feature, or simply that the first feature is at a lower level than the second feature.
While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the invention, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the invention.

Claims (10)

1. A method for processing the sound sensation of a virtual concert, the method comprising:
receiving a selection instruction, and determining a target position in a virtual concert according to the selection instruction;
determining a first audience transmission audio transmitted to the target position;
acquiring the sound position of sound in the virtual concert, determining sound audio, and determining sound transmission audio transmitted to the target position by the sound audio according to the target position and the sound position;
and mixing the first audience transmission audio and the sound transmission audio to obtain mixed audio transmitted to the target position.
2. The method of claim 1, wherein determining the first audience transmission audio transmitted to the target position comprises:
determining a first audience within a first preset range of the target position;
acquiring a first position of each first audience and first audience audio collected from a microphone of each first audience;
determining, according to the target position and each first position, the first audience transmission audio for each first audience audio when transmitted to the target position;
and mixing all the first audience transmission audio to obtain the first audience transmission audio transmitted to the target position.
3. The method of claim 1, wherein determining the sound audio comprises:
acquiring the singer position of a singer in the virtual concert and the headset audio of the singer;
determining a second audience within a second preset range of the singer position;
acquiring a second position of each of the second viewers and second viewer audio collected from microphones of each of the second viewers;
determining, according to the singer position and each of the second positions, second audience transmission audio for each of the second audience audio when transmitted to the singer position;
and mixing the headset audio and all the second audience transmission audio to obtain the sound audio.
4. The method of claim 3, wherein determining, according to the singer position and each of the second positions, the second audience transmission audio for each of the second audience audio when transmitted to the singer position comprises:
calculating a distance between each second audience and the singer according to the singer position and each second position;
performing attenuation processing on the corresponding second audience audio according to each distance to obtain corresponding second audience attenuated audio;
and mixing all the second audience attenuated audio to obtain the second audience transmission audio.
5. The method of claim 4, wherein performing attenuation processing on the corresponding second audience audio according to each distance further comprises:
acquiring the frequency of each second audience audio;
and attenuating the corresponding second audience audio according to each distance and each frequency.
6. The sound sensation processing method of a virtual concert according to claim 4, wherein mixing all the second audience attenuated audio to obtain the second audience transmission audio comprises:
calculating an audio delay time between each second audience and the singer according to each distance;
performing delay processing on each second audience attenuated audio according to each audio delay time to obtain each second audience delayed audio;
and mixing each second audience delayed audio to obtain the second audience transmission audio.
7. The method of claim 2, wherein the target position comprises a target head orientation and the first position comprises a first head orientation; and determining, according to the target position and each first position, the first audience transmission audio for each first audience audio when transmitted to the target position comprises:
determining a direction included angle between each first position and the target position according to the target head orientation and each first head orientation;
and determining, according to all the direction included angles, the first audience transmission audio when each first audience audio is transmitted to the target position.
8. The sound sensation processing method of a virtual concert according to claim 1, wherein, when receiving the selection instruction and determining the target position in the virtual concert according to the selection instruction, the method further comprises:
obtaining a network delay condition, and providing a position range according to the network delay condition.
9. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the sound sensation processing method of a virtual concert according to any one of claims 1-8.
10. An electronic device comprising a memory, a processor, the memory having stored thereon a computer program, wherein the computer program, when executed by the processor, implements the method of acoustic sensation processing of a virtual concert according to any of claims 1-8.
CN202310296647.2A 2023-03-23 2023-03-23 Sound sensation processing method of virtual concert, storage medium and electronic equipment Pending CN116437282A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310296647.2A CN116437282A (en) 2023-03-23 2023-03-23 Sound sensation processing method of virtual concert, storage medium and electronic equipment


Publications (1)

Publication Number Publication Date
CN116437282A true CN116437282A (en) 2023-07-14

Family

ID=87093591

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310296647.2A Pending CN116437282A (en) 2023-03-23 2023-03-23 Sound sensation processing method of virtual concert, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN116437282A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060008117A1 (en) * 2004-07-09 2006-01-12 Yasusi Kanada Information source selection system and method
KR20170131059A (en) * 2016-05-20 2017-11-29 박건웅 My-concert system
US10484811B1 (en) * 2018-09-10 2019-11-19 Verizon Patent And Licensing Inc. Methods and systems for providing a composite audio stream for an extended reality world
CN110493703A (en) * 2019-07-24 2019-11-22 天脉聚源(杭州)传媒科技有限公司 Stereo audio processing method, system and the storage medium of virtual spectators
TWI706292B (en) * 2019-05-28 2020-10-01 醒吾學校財團法人醒吾科技大學 Virtual Theater Broadcasting System
CN112882568A (en) * 2021-01-27 2021-06-01 深圳市慧鲤科技有限公司 Audio playing method and device, electronic equipment and storage medium
CN113965869A (en) * 2021-09-09 2022-01-21 深圳市广程杰瑞科技有限公司 Sound effect processing method, device, server and storage medium
WO2022018786A1 (en) * 2020-07-20 2022-01-27 株式会社ウフル Sound processing system, sound processing device, sound processing method, and sound processing program
WO2022264536A1 (en) * 2021-06-15 2022-12-22 ソニーグループ株式会社 Information processing device, information processing method, and information processing system


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Rasmus B. Lind: "Sound design in virtual reality concert experiences using a wave field synthesis approach", 2017 IEEE Virtual Reality (VR), 6 April 2017 (2017-04-06) *
Zhou Yuan: "On the tuning techniques of stage sound in performances and their relationship to effects", Digital World, 1 June 2018 (2018-06-01) *
Cai Zhuo'er: "A study of virtual presence in online concerts: from the perspective of interaction ritual chains", New Media Research, 20 July 2022 (2022-07-20) *
Chen Yiben: "The cloud concert landscape under extended reality technology: from the perspective of virtual architecture in metaverse space", Journal of Hubei University of Arts and Science, 31 December 2022 (2022-12-31) *

Similar Documents

Publication Publication Date Title
US9197755B2 (en) Multidimensional virtual learning audio programming system and method
US9693170B2 (en) Multidimensional virtual learning system and method
US6798889B1 (en) Method and apparatus for multi-channel sound system calibration
EP2926572B1 (en) Collaborative sound system
US20130324031A1 (en) Dynamic allocation of audio channel for surround sound systems
KR101839504B1 (en) Audio Processor for Orientation-Dependent Processing
US9930469B2 (en) System and method for enhancing virtual audio height perception
US20120155671A1 (en) Information processing apparatus, method, and program and information processing system
JP6111611B2 (en) Audio amplifier
CN116437282A (en) Sound sensation processing method of virtual concert, storage medium and electronic equipment
WO2016208487A1 (en) Content playback device, content playback system, and content playback method
CN110603822B (en) Audio processing device and audio processing method
CN113709631B (en) Surround sound system and method for applying surround sound technology to electronic contest seats
CN118170339A (en) Audio control method, audio control device, medium and electronic equipment
CN116036591A (en) Sound effect optimization method, device, equipment and storage medium
CN117130575A (en) Control method, device, equipment and storage medium
CN114866948A (en) Audio processing method and device, electronic equipment and readable storage medium
CN116785710A (en) Sound playing method, device, equipment and storage medium
CN116996701A (en) Audio processing method, device, electronic equipment and storage medium
Karlsson et al. 3D Audio for Mobile Devices via Java

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination