CN114650496A - Audio playing method and electronic equipment - Google Patents

Audio playing method and electronic equipment

Info

Publication number
CN114650496A
CN114650496A
Authority
CN
China
Prior art keywords
electronic device
state information
target
audio data
spatial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210225832.8A
Other languages
Chinese (zh)
Inventor
文梁宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202210225832.8A priority Critical patent/CN114650496A/en
Publication of CN114650496A publication Critical patent/CN114650496A/en
Priority to PCT/CN2023/079874 priority patent/WO2023169367A1/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04S: STEREOPHONIC SYSTEMS
    • H04S7/00: Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30: Control circuits for electronic adaptation of the sound field
    • H04S7/305: Electronic adaptation of stereophonic audio signals to reverberation of the listening space
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16: Sound input; Sound output
    • G06F3/162: Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16: Sound input; Sound output
    • G06F3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Abstract

The application discloses an audio playing method and electronic equipment, and belongs to the technical field of electronics. The audio playing method comprises the following steps: acquiring a control instruction of spatial state information; determining spatial state information of the first electronic device and each second electronic device according to the control instruction; performing spatial mixing processing on the audio data played by the first electronic device and the audio data played by each second electronic device according to each piece of spatial state information to obtain first target audio data; and sending the first target audio data to the audio playing device for playing. Because the spatial mixing processing is performed according to the spatial state information respectively corresponding to the first electronic device and the at least one second electronic device, the user is prevented from becoming disoriented when simultaneously listening to different audio data played by a plurality of electronic devices located at different positions.

Description

Audio playing method and electronic equipment
Technical Field
The application belongs to the technical field of electronics, and particularly relates to an audio playing method and electronic equipment.
Background
With the development of electronic technology, the use of electronic devices is becoming more and more common. In practical application scenarios, a user may use a plurality of electronic devices to play different audio data simultaneously, for example, playing a game on a computer while a live stream keeps running on a mobile phone.
If the audio is played out loud, other people may be disturbed, and privacy is poor. If multiple audio data are played simultaneously through an earphone, then no matter which direction each electronic device is in relative to the user, the user perceives all the sound as coming from directly ahead, and there is no time difference between the different audio data reaching the ears. It may therefore be difficult for the user to distinguish which electronic device each piece of audio data comes from, resulting in a sense of disorientation.
Disclosure of Invention
The embodiments of the application aim to provide an audio playing method and an electronic device, which can solve the problem of avoiding the sense of disorientation generated when a user listens to a plurality of different audio data simultaneously.
In a first aspect, an embodiment of the present application provides an audio playing method, which is applied to a first electronic device, where the first electronic device is connected to at least one second electronic device, and the first electronic device is connected to an audio playing device, and the audio playing method includes:
acquiring a control instruction of spatial state information;
determining spatial state information of the first electronic device and each second electronic device according to the control instruction;
performing spatial mixing processing on the audio data played by the first electronic device and the audio data played by each second electronic device according to each piece of spatial state information to obtain first target audio data;
and sending the first target audio data to the audio playing device for playing.
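As a rough illustration, the four claimed steps could be sketched as follows. All class, method, and device names, and the simple azimuth-based weighting that stands in for real spatial mixing, are illustrative assumptions rather than anything specified by the patent.

```python
# Hypothetical sketch of the four-step method of the first aspect.

class FirstDevice:
    def __init__(self, own_audio, second_audio):
        self.own_audio = own_audio        # samples played by the first device
        self.second_audio = second_audio  # {device_id: samples} per second device

    def get_control_instruction(self):
        # Step 1: acquire a control instruction of spatial state information,
        # here simply a device -> azimuth (degrees, 0 = directly in front) map.
        return {"first": 0.0, "second_1": 90.0}

    def determine_states(self, instruction):
        # Step 2: determine spatial state information for every device.
        return dict(instruction)

    def mix(self, states):
        # Step 3: spatial mixing; weight each stream by how close its
        # virtual source is to directly in front of the user.
        def weight(azimuth):
            return max(0.0, 1.0 - abs(azimuth) / 180.0)

        mixed = [s * weight(states["first"]) for s in self.own_audio]
        for device_id, samples in self.second_audio.items():
            w = weight(states[device_id])
            mixed = [m + s * w for m, s in zip(mixed, samples)]
        return mixed  # first target audio data

    def play(self, playback_device):
        states = self.determine_states(self.get_control_instruction())
        target = self.mix(states)
        playback_device.append(target)  # Step 4: send to the playback device
        return target
```

With one sample of 1.0 from each device, the front device contributes weight 1.0 and the device at 90 degrees contributes 0.5, so the mixed sample is 1.5.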
In a second aspect, an embodiment of the present application provides an audio playing apparatus, which is applied to a first electronic device, where the first electronic device is connected to at least one second electronic device, and the first electronic device is connected to an audio playing device, and the audio playing apparatus includes:
the acquisition module is used for acquiring a control instruction of spatial state information;
the determining module is used for determining the spatial state information of the first electronic device and each second electronic device according to the control instruction;
the processing module is used for performing spatial mixing processing on the audio data played by the first electronic device and the audio data played by each second electronic device according to each piece of spatial state information to obtain first target audio data;
and the sending module is used for sending the first target audio data to the audio playing equipment for playing.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the audio playing method according to the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a program or instructions are stored, and when executed by a processor, the program or instructions implement the steps of the audio playing method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the audio playing method according to the first aspect.
In the embodiment of the application, a control instruction of spatial state information is acquired; spatial state information of the first electronic device and each second electronic device is determined according to the control instruction; spatial mixing processing is performed on the audio data played by the first electronic device and the audio data played by each second electronic device according to each piece of spatial state information to obtain first target audio data; and the first target audio data is sent to the audio playing device for playing. Through the technical scheme of the embodiment of the application, the spatial mixing processing can be performed according to the spatial state information respectively corresponding to the first electronic device and the at least one second electronic device, which prevents the user from becoming disoriented when simultaneously listening to different audio data played by a plurality of electronic devices located at different positions.
Drawings
Fig. 1 is a first flowchart illustrating an audio playing method according to an embodiment of the present application;
fig. 2 is a schematic connection relationship diagram of a first electronic device, a second electronic device and an audio playing device according to an embodiment of the present application;
fig. 3 is a diagram of a setting interface of a spatial audio state in an audio playing method according to an embodiment of the present application;
fig. 4A is a schematic view of a first scenario of an audio playing method according to an embodiment of the present application;
fig. 4B is a schematic diagram of a second scenario of an audio playing method according to an embodiment of the present application;
fig. 5 is a schematic diagram illustrating a third scenario of an audio playing method according to an embodiment of the present application;
fig. 6 is a schematic flowchart of a second audio playing method according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an audio playing apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 9 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second" and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be appreciated that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. In addition, objects distinguished by "first", "second" and the like are generally of one type, and the number of objects is not limited; for example, the first object may be one object or a plurality of objects. "And/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and following objects.
The following describes in detail the audio playing method provided by the embodiment of the present application through specific embodiments and application scenarios thereof with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a first audio playing method according to an embodiment of the present application.
The audio playing method is applied to first electronic equipment, the first electronic equipment is connected with at least one second electronic equipment, and the first electronic equipment is connected with the audio playing equipment.
The first electronic device may be an electronic device having an audio data processing function and an audio playing function, such as a computer, a mobile phone, a tablet, and the like. The second electronic device may be an electronic device with an audio data playing function, such as a computer, a mobile phone, a tablet, and the like.
The at least one second electronic device may be one second electronic device or a plurality of second electronic devices.
The audio playing device can be a headset or other audio electronic devices.
Fig. 2 is a schematic connection relationship diagram of a first electronic device, a second electronic device, and an audio playing device according to an embodiment of the present application. As shown in fig. 2, a first electronic device 201 is connected to a second electronic device 202, and the first electronic device 201 is connected to an earphone 203. The user wears the headphones 203 to listen to the audio data played by the first electronic device 201 and the audio data played by the second electronic device 202 at the same time. In fig. 2, the user faces the second electronic device 202, and can also view a video screen played by the second electronic device 202.
Step 102: acquiring a control instruction of spatial state information.
The spatial state information may be spatial position information alone, spatial position information together with audio parameters, or a spatial audio state. The audio parameters may be volume information, timbre information, or any other parameter that can affect the auditory effect.
The spatial position information may be spatial position information of a virtual sound source corresponding to the first electronic device in a preset sphere, or spatial position information of a virtual sound source corresponding to any one of the second electronic devices in the preset sphere. The volume information may be the volume of the first electronic device, or may be the volume of any second electronic device.
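One way to represent the spatial position of a virtual sound source in the preset sphere is with spherical coordinates relative to the listener at the sphere's center. The convention below (azimuth 0 and elevation 0 meaning directly in front, positive azimuth to the right) is an assumption for illustration, not something the patent fixes.

```python
import math
from dataclasses import dataclass

@dataclass
class VirtualSource:
    """Spatial position of a device's virtual sound source on the preset
    sphere, expressed relative to the listener at the sphere's center."""
    azimuth_deg: float    # 0 = directly in front, +90 = right, -90 = left
    elevation_deg: float  # 0 = ear level, +90 = overhead
    radius: float = 1.0   # radius of the preset sphere

    def to_cartesian(self):
        # Convert to (right, front, up) coordinates for downstream mixing.
        az = math.radians(self.azimuth_deg)
        el = math.radians(self.elevation_deg)
        x = self.radius * math.cos(el) * math.sin(az)
        y = self.radius * math.cos(el) * math.cos(az)
        z = self.radius * math.sin(el)
        return (x, y, z)
```

A source at azimuth 0 maps to the point one radius directly in front of the listener; a source at azimuth +90 maps to the point directly to the right.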
The control instruction may be a setting instruction, a modification instruction, a detection instruction, or the like.
It should be noted that the volume information may be adjusted independently, may be adjusted simultaneously with the spatial position information, or, after the spatial position information is adjusted, may be set to the volume information corresponding to the adjusted spatial position information.
Optionally, the spatial state information includes spatial position information and an audio parameter, and the obtaining of the control instruction of the spatial state information includes: receiving a first control instruction aiming at the spatial position information; and/or receiving a second control instruction for the audio parameter.
The first control instruction may be a setting instruction or a modification instruction for spatial position information of the first electronic device, or may be a setting instruction or a modification instruction for spatial position information of at least one second electronic device.
Optionally, receiving a first control instruction for spatial location information includes: in the first electronic device and at least one second electronic device, aiming at any one electronic device, on a user interaction interface, determining position information of a virtual sound source corresponding to the electronic device in a preset sphere as space position information of the electronic device; receiving a position adjusting instruction of a virtual sound source; the position adjusting instruction is used for adjusting the position information of the virtual sound source in the preset sphere.
The user interaction interface may refer to fig. 3. Fig. 3 is a setting interface diagram of a spatial audio state in an audio playing method according to an embodiment of the present application. The setting interface diagram of the spatial audio state shows a spatial audio state one and a spatial audio state two.
The spatial position information of the first electronic device and the second electronic devices 1, 2, …, n corresponding to spatial audio state one, and the spatial position information of the same devices corresponding to spatial audio state two, can be seen in fig. 3.
The setting interface of the spatial audio state also shows parameter setting interfaces for the volumes of the first electronic device and the second electronic devices 1, 2, …, n in spatial audio state one, and likewise in spatial audio state two.
In the first spatial audio state, the position indicated by the arrow corresponding to the first electronic device is the spatial position of the first electronic device on the sphere, and similarly, the position indicated by the arrow corresponding to each second electronic device is the spatial position of each second electronic device on the sphere. The spatial audio state two is similar to the spatial audio state one, and the description thereof is omitted here.
The position adjusting instruction may be to add a new virtual sound source in a preset sphere and set position information of the virtual sound source, or to adjust a virtual sound source from preset initial position information to target position information meeting user requirements in the preset sphere, or to adjust a virtual sound source from position information meeting old hearing requirements of a user to position information meeting new hearing requirements of the user in the preset sphere.
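The three kinds of position-adjusting instructions described above could be handled roughly as follows. The `(azimuth, elevation)` tuple representation and the function names are assumptions made for illustration.

```python
# Hypothetical handlers for a position-adjusting instruction. Each virtual
# sound source is stored as device_id -> (azimuth_deg, elevation_deg).

def add_virtual_source(sources, device_id, position):
    """Add a new virtual sound source in the preset sphere and set its
    position information."""
    sources[device_id] = position

def move_virtual_source(sources, device_id, target_position):
    """Adjust an existing virtual sound source from its preset initial (or
    old) position to a target position meeting the user's requirements."""
    if device_id not in sources:
        raise KeyError(f"no virtual source for {device_id}")
    sources[device_id] = target_position
```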
The second control instruction may be a setting instruction or a modification instruction for an audio parameter of the first electronic device, or may be a setting instruction or a modification instruction for an audio parameter of at least one second electronic device.
The audio parameters may be volume information, timbre information, etc. of various parameters that may affect the auditory effect. The following description takes the audio parameter as the volume information as an example:
the user may preset volume information in different scenes, and specifically, may configure a plurality of volume states, and volume information of the first electronic device and volume information of each second electronic device in each volume state. The second control instruction may also be a switch instruction for the volume state.
For example, the volume of the audio data played by a first electronic device directly in front of the user may be larger, while the volume of the audio data played by second electronic devices at the user's sides may be smaller.
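The preconfigured volume states described above might look like the following; the state names and volume values are invented for illustration.

```python
# Hypothetical preset volume states: each named state maps every device to
# volume information, and a switch instruction selects one state.

VOLUME_STATES = {
    # Front device louder, side devices quieter (the example above).
    "focus_front": {"first": 0.9, "second_1": 0.3, "second_2": 0.3},
    # All devices at equal volume.
    "balanced": {"first": 0.6, "second_1": 0.6, "second_2": 0.6},
}

def switch_volume_state(state_name):
    """Handle a switch instruction for the volume state by returning the
    per-device volume information of the selected state."""
    return dict(VOLUME_STATES[state_name])
```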
Optionally, the audio parameter comprises volume information; receiving a second control instruction for the audio parameter, comprising: in the first electronic device and the at least one second electronic device, a volume adjusting instruction for any one electronic device is received on a user interaction interface.
The user interaction interface may refer to fig. 3. In the user interaction interface as shown in fig. 3, the volume information of each of the first electronic device and the at least one second electronic device may be set separately, and the volume information may be increased or decreased.
Optionally, the first electronic device is a master sound device located directly in front of the user, and each second electronic device is a consonant device located not directly in front of the user; acquiring the control instruction of the spatial state information comprises: acquiring a first master-consonant switching instruction, wherein the first master-consonant switching instruction is used for changing the spatial state information of the first electronic device and each second electronic device, so that the spatial state information of the selected second electronic device corresponds to directly in front of the user and the spatial state information of the first electronic device corresponds to not directly in front of the user. Alternatively, the at least one second electronic device comprises a target second electronic device, the target second electronic device is the master sound device, and the first electronic device is a consonant device; acquiring the control instruction of the spatial state information comprises: acquiring a second master-consonant switching instruction, wherein the second master-consonant switching instruction is used for changing the spatial state information of the first electronic device and the target second electronic device, so that the spatial state information of the first electronic device corresponds to directly in front of the user and the spatial state information of the target second electronic device corresponds to not directly in front of the user.
The master sound device is located directly in front of the user, and the user's attention is mainly focused on the audio data played by the master sound device. A consonant device is located not directly in front of the user, which can be understood as meaning that the user's attention to the audio data played by the consonant device ranks behind the audio data played by the master sound device.
For example, a computer located directly in front of the user is playing an online class and is the master sound device, while a mobile phone on the user's left is playing a shopping live stream and is a consonant device; the user focuses on the online class, and pays less attention to the live-stream audio data than to the class.
In the case where the at least one second electronic device comprises a target second electronic device, the following switches between the master sound device and a consonant device are possible:
(a1) The first electronic device is the master sound device and the target second electronic device is a consonant device; through the first master-consonant switching instruction, the target second electronic device is switched to the master sound device and the first electronic device is switched to a consonant device.
(a2) The first electronic device is a consonant device and the target second electronic device is the master sound device; through the second master-consonant switching instruction, the first electronic device is switched to the master sound device and the target second electronic device is switched to a consonant device.
For example, the first electronic device is the master sound device located directly in front of the user, and the target second electronic device is a consonant device located on the user's left side; when the user turns to face the target second electronic device, the spatial state information of the first electronic device and the target second electronic device is changed through the first master-consonant switching instruction.
In the case where the at least one second electronic device includes at least two second electronic devices, taking the second electronic device 1 and the second electronic device 2 as an example, the possible switches between the master sound device and a consonant device are:
(b1) The first electronic device is the master sound device, and the second electronic devices 1 and 2 are consonant devices; through the first master-consonant switching instruction, the selected second electronic device 1 is switched to the master sound device and the first electronic device is switched to a consonant device.
(b2) The first electronic device is the master sound device, and the second electronic devices 1 and 2 are consonant devices; through the first master-consonant switching instruction, the selected second electronic device 2 is switched to the master sound device and the first electronic device is switched to a consonant device.
(b3) The second electronic device 2 is the master sound device, and the first electronic device and the second electronic device 1 are consonant devices; through the third master-consonant switching instruction, the selected first electronic device is switched to the master sound device and the second electronic device 2 is switched to a consonant device.
(b4) The second electronic device 2 is the master sound device, and the first electronic device and the second electronic device 1 are consonant devices; through the third master-consonant switching instruction, the selected second electronic device 1 is switched to the master sound device and the second electronic device 2 is switched to a consonant device.
(b5) The second electronic device 1 is the master sound device, and the first electronic device and the second electronic device 2 are consonant devices; through the third master-consonant switching instruction, the selected first electronic device is switched to the master sound device and the second electronic device 1 is switched to a consonant device.
(b6) The second electronic device 1 is the master sound device, and the first electronic device and the second electronic device 2 are consonant devices; through the third master-consonant switching instruction, the selected second electronic device 2 is switched to the master sound device and the second electronic device 1 is switched to a consonant device.
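All of cases (a1) through (b6) above reduce to the same rule: the device selected by a switching instruction becomes the master sound device, and every other connected device, including the previous master, becomes a consonant device. A sketch of that rule (device ids are illustrative):

```python
# Sketch of the master/consonant switching common to cases (a1)-(b6).

def apply_switch_instruction(devices, selected):
    """Return device_id -> role after a master-consonant switching
    instruction selecting `selected` as the new master sound device."""
    if selected not in devices:
        raise ValueError(f"unknown device: {selected}")
    return {d: ("master" if d == selected else "consonant") for d in devices}
```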
The first master-consonant switching instruction can be implemented in at least the following ways:
(c1) Performing a control operation on the spatial state information on a preset user interaction interface.
For example, the preset sphere is rotated so that, after the rotation, the virtual sound source corresponding to the electronic device selected as the master sound device reaches the preset position corresponding to the master sound device; in this case, the virtual sound source originally at that preset position moves to another position on the sphere after the rotation.
For another example, each of the first electronic device and the at least one second electronic device corresponds to one virtual sound source, and dragging a virtual sound source within the preset sphere changes its position information in the preset sphere. The user can drag the virtual sound source corresponding to the first electronic device away from the preset position corresponding to the master sound device, and can drag the virtual sound source corresponding to a second electronic device to that preset position.
For another example, a plurality of preset spatial audio states are displayed on the user interaction interface: in spatial audio state 1, the first electronic device is the master sound device and the target second electronic device is a consonant device; in spatial audio state 2, the target second electronic device is the master sound device and the first electronic device is a consonant device. The system is currently in spatial audio state 1 and switches to spatial audio state 2 according to a user operation.
(c2) Moving the pupil gaze point.
For example, the user initially gazes at the computer directly in front, and then turns his head to gaze at the mobile phone on the left.
The second and third master-consonant switching instructions are similar to the first master-consonant switching instruction and are not described again here.
That the spatial state information of the second electronic device corresponds to directly in front of the user may mean that the virtual sound source corresponding to the second electronic device is located at the preset position of the master sound device in the preset sphere, or that the pupil gaze point is located on the second electronic device.
In this case, the audio data played by the second electronic device sounds as if it comes from directly in front.
That the spatial state information of the first electronic device corresponds to not directly in front of the user may mean that the virtual sound source corresponding to the first electronic device is not at the preset position of the master sound device in the preset sphere, or that the pupil gaze point is located off the first electronic device.
In this case, the audio data played by the first electronic device sounds as if it comes from a direction other than directly in front.
The cases where the spatial state information of the first electronic device corresponds to directly in front of the user, and where the spatial state information of the target second electronic device corresponds to not directly in front of the user, are similar to the above and are not repeated here.
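Whether audio "sounds as if it comes from directly in front" or from another direction can be illustrated with a constant-power stereo pan law. This is a much-simplified stand-in for full spatial rendering; the patent does not specify the rendering method at this level.

```python
import math

def pan_gains(azimuth_deg):
    """Constant-power pan: map an azimuth in [-90, +90] degrees
    (-90 = left, 0 = directly in front, +90 = right) to (left, right)
    channel gains. A simplified stand-in for HRTF-based rendering."""
    azimuth_deg = max(-90.0, min(90.0, azimuth_deg))
    theta = math.radians((azimuth_deg + 90.0) / 2.0)  # 0..90 degrees
    return (math.cos(theta), math.sin(theta))
```

A source at azimuth 0 gets equal gains of about 0.707 in each ear, so it is heard from the front; a source at +90 gets gains (0, 1), so it is heard entirely on the right.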
It should be noted that, in a case that the actual positions of the first electronic device and the second electronic devices are not changed, when the spatial position information of the first electronic device in the preset sphere is changed, the spatial position information of each second electronic device in the preset sphere is changed accordingly, and reference may be made to fig. 4A and 4B.
Fig. 4A is a schematic view of a first scenario of an audio playing method according to an embodiment of the present application; fig. 4B is a schematic view of a second scenario of an audio playing method according to an embodiment of the present application.
As shown in fig. 4A, in a first scenario, a first electronic device 401 is located directly in front of the user and a second electronic device 402 is located on the right hand side of the user. As shown in fig. 4B, in the second scenario, the first electronic device 401 is located on the left side of the user and the second electronic device 402 is located directly in front of the user.
For example, in the first scenario, the user gazes at the first electronic device 401 directly in front while listening to the audio data played by the first electronic device 401 and the audio data played by the second electronic device 402 on the right. Then, in the second scenario, the user turns his head to gaze at the second electronic device 402 while continuing to listen to the audio data played by both devices. The actual positions of the first electronic device 401 and the second electronic device 402 are unchanged; only the position of the user's pupil gaze point has changed, and with the second electronic device 402 now directly in front of the user, the first electronic device 401 is on the user's left side.
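The re-referencing shown in fig. 4A and fig. 4B, where the devices' actual positions stay fixed but every position relative to the user shifts because the user turned, can be sketched as follows; the azimuth convention (degrees, 0 = directly in front) is an assumption.

```python
# When the user turns to face a different device, each device's azimuth
# relative to the user shifts by the same angle. Sketch: rotate all azimuths
# so the newly faced device sits at 0 degrees (directly in front).

def refocus(azimuths, faced_device):
    """azimuths: device_id -> azimuth in degrees; returns the azimuths
    re-referenced so that `faced_device` is directly in front."""
    shift = azimuths[faced_device]
    return {
        device: ((azimuth - shift + 180.0) % 360.0) - 180.0
        for device, azimuth in azimuths.items()
    }
```

In the first scenario the first device is at 0 and the second at +90; after the user turns toward the second device, the second is at 0 and the first at -90, matching fig. 4B.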
Optionally, acquiring the control instruction of the spatial state information comprises: acquiring a first detection result of the first electronic device for the pupil gaze point, and a second detection result of each second electronic device for the pupil gaze point.
A sensor having a pupil fixation point detection capability may be provided on the first electronic device and each of the second electronic devices. Each second electronic device may transmit the obtained second detection result to the first electronic device after performing pupil gaze point detection by the sensor.
At the same time point, the pupil fixation point may exist in the first electronic device or in a second electronic device.
The first detection result may include whether the pupil gaze point is detected or not, and may also include position information of the pupil gaze point on the first electronic device.
The second detection result may include whether the pupil gaze point is detected or not, and may also include position information of the pupil gaze point on the second electronic device.
And step 104, determining the space state information of the first electronic device and each second electronic device according to the control instruction.
Optionally, determining, according to the control instruction, spatial state information of the first electronic device and each of the second electronic devices includes: determining a target position watched by the pupil according to the first detection result and the at least one second detection result; and determining the space state information of the first electronic equipment and each second electronic equipment according to the target position.
If the first detection result indicates that the pupil gaze point is detected on the first electronic device and includes the position information of the pupil gaze point, and the second detection result indicates that the pupil gaze point is not detected on the second electronic device, it may be determined that the target position gazed at by the pupil is on the first electronic device, and the position information of the target position is the position information of the pupil gaze point.
If the second detection result indicates that the pupil fixation point is detected on the second electronic device and includes the position information of the pupil fixation point, and the first detection result indicates that the pupil fixation point is not detected on the first electronic device, it may be determined that the target position gazed at by the pupil is on the second electronic device, and the position information of the target position is the position information of the pupil fixation point.
According to the position information of the pupil fixation point, the spatial state information of the first electronic device and each second electronic device can be determined.
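The target-position resolution described above can be sketched as follows. This is only an illustrative sketch; the result-dictionary fields and the device identifiers are assumptions, not part of the embodiment:

```python
def resolve_gaze_target(first_result, second_results):
    """Determine which device the pupil gaze point falls on.

    first_result: {"detected": bool, "position": (x, y) or None}
        detection result reported by the first electronic device.
    second_results: device_id -> result dict of the same shape,
        one per second electronic device.
    Returns (device_id, position); "first" denotes the first device,
    or (None, None) if the gaze point is on no device's screen.
    """
    if first_result["detected"]:
        return "first", first_result["position"]
    for device_id, result in second_results.items():
        if result["detected"]:
            return device_id, result["position"]
    return None, None
```

At any one time point at most one device reports a detected gaze point, so a simple first-match scan suffices.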
Optionally, determining the spatial state information of the first electronic device and each second electronic device according to the target location includes: determining a target electronic device watched by the pupil and at least one non-target electronic device not watched in the first electronic device and the at least one second electronic device according to the target position; determining space state information of the target electronic equipment according to the target position; and determining the space state information of each non-target electronic device according to a preset space state information set and the space state information of the target electronic device.
For example, the first electronic device is connected to two second electronic devices, which are the second electronic device 1 and the second electronic device 2, respectively. If the target position is on the first electronic device, the first electronic device is determined as a target electronic device watched by the pupil, and the second electronic device 1 and the second electronic device 2 are determined as non-target electronic devices not watched. If the target position is on the second electronic device 2, the second electronic device 2 is determined as a target electronic device gazed by the pupil, and the second electronic device 1 and the first electronic device are determined as non-target electronic devices not gazed at.
Determining the spatial state information of the target electronic device according to the target position can be understood as mapping the position information of the pupil fixation point to the position in the preset sphere that corresponds to directly in front of the user.
The preset set of spatial state information may correspond to a preset spatial audio state. Determining the spatial state information of each non-target electronic device according to the preset set of spatial state information and the spatial state information of the target electronic device can be understood as follows: the relative positions of the first electronic device and each second electronic device in the preset sphere are fixed, so once the position information of the pupil fixation point is mapped to the position in the preset sphere corresponding to directly in front of the user, the spatial state information of each non-target electronic device follows accordingly.
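Because the relative positions on the preset sphere are fixed, bringing the target device to "directly in front" amounts to a single rotation applied to every device. A minimal sketch, where representing the spatial position information as an azimuth angle in degrees is an assumption made for illustration:

```python
def rotate_to_front(azimuths, target_device):
    """Rotate all device azimuths on the preset sphere so that the gazed
    (target) device ends up at 0 degrees, i.e. directly in front of the user.

    azimuths: device_id -> azimuth in degrees (0 = front, 90 = right).
    Returns a new mapping with the same relative angles preserved.
    """
    shift = azimuths[target_device]  # rotation that brings the target to 0
    return {dev: (az - shift) % 360 for dev, az in azimuths.items()}
```

For instance, if the second device at 90° (the user's right) becomes the gaze target, the first device that was at 0° ends up at 270° (the user's left), matching the scenarios of fig. 4A and 4B.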
Fig. 5 is a schematic diagram of a third scenario of an audio playing method according to an embodiment of the present application.
As shown in fig. 5, first, a sensor detects that the pupil fixation point 503 of the user is located on the first electronic device 501, and the user focuses on the first electronic device 501. At this time, the first electronic device 501 is located directly in front of the user, the spatial position information of the first electronic device 501 in the preset sphere corresponds to directly in front of the user, and the volume of the first electronic device 501 is a preset value. The second electronic device 502 is located at the right hand side of the user, the spatial position information of the second electronic device 502 in the preset sphere corresponds to the right side of the user, and the volume of the second electronic device 502 is slightly smaller than that of the first electronic device 501. At this time, the audio data played by the first electronic device 501 serves as the main audio, and the audio data played by the second electronic device 502 serves as the auxiliary audio.
Second, the user's pupil fixation point 503 is moved out of the edge of the screen of the first electronic device. The volume level of first electronic device 501 gradually becomes smaller as pupil point of regard 503 moves out of the screen. The spatial position information of the first electronic device 501 in the preset sphere gradually moves to the left as the pupil fixation point 503 moves, and the direction tends to the left hand of the user.
Next, the pupil gaze point 503 of the user starts to enter the screen edge of the second electronic device 502. The volume of the second electronic device 502 gradually increases as the pupil gaze point 503 enters the screen. The spatial position information of the second electronic device in the preset sphere gradually moves to the left as the pupil fixation point 503 moves, and the direction of the spatial position information tends to be right in front of the user.
Finally, the pupil fixation point 503 is located on the second electronic device 502. The volume of the second electronic device 502 gradually increases until it reaches the preset value, and the spatial position information of the second electronic device 502 in the preset sphere corresponds to directly in front of the user. The volume of the first electronic device 501 gradually decreases, and the spatial position information of the first electronic device 501 in the preset sphere corresponds to the user's left hand. At this time, the audio data played by the second electronic device 502 serves as the main audio, and the audio data played by the first electronic device 501 serves as the auxiliary audio.
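The gradual volume handover in this scenario behaves like a crossfade driven by how far the gaze has travelled from one screen to the other. The following is a hypothetical sketch; the linear fade and the progress parameter are assumptions for illustration:

```python
def crossfade_volumes(progress, preset_volume=1.0):
    """Volumes of the two devices as the gaze moves between their screens.

    progress: 0.0 when the gaze rests on the first device 501,
              1.0 when it has fully settled on the second device 502.
    Returns (volume_of_first, volume_of_second).
    """
    progress = max(0.0, min(1.0, progress))  # clamp to the valid range
    return preset_volume * (1.0 - progress), preset_volume * progress
```

A real implementation might smooth the fade over time or use an equal-power curve, but the monotonic swap of main and auxiliary volume is the essential behavior.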
Optionally, determining the spatial state information of the first electronic device and each second electronic device according to the target location includes: determining a target electronic device watched by the pupil and at least one non-target electronic device not watched in the first electronic device and the at least one second electronic device according to the target position; determining a corresponding space state information combination from a plurality of preset alternative space state information combinations according to the target electronic equipment; the spatial state information combination corresponding to the target electronic device comprises the spatial state information of the target electronic device and the spatial state information of each non-target electronic device.
The preset multiple candidate spatial state information combinations may be spatial state information combinations corresponding to multiple preset spatial audio states. For example, the candidate combinations may include spatial state information combination 1, corresponding to the first electronic device being directly in front of the user, and spatial state information combination 2, corresponding to the second electronic device being directly in front of the user. In a case where the target electronic device is the first electronic device, spatial state information combination 1 may be determined according to the first electronic device, and the respective spatial state information in combination 1 may be determined as the spatial state information of the first electronic device and each second electronic device.
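Selecting among preset candidate combinations can be as simple as a table keyed by the target electronic device. The contents of each combination below (azimuths and volumes) are invented for illustration and are not values from the embodiment:

```python
# Hypothetical candidate combinations; azimuth 0 deg = directly in front.
CANDIDATE_COMBINATIONS = {
    # Combination 1: first device directly in front of the user.
    "first":  {"first":  {"azimuth": 0,   "volume": 1.0},
               "second": {"azimuth": 90,  "volume": 0.6}},
    # Combination 2: second device directly in front of the user.
    "second": {"first":  {"azimuth": -90, "volume": 0.6},
               "second": {"azimuth": 0,   "volume": 1.0}},
}

def select_combination(target_device):
    """Return the spatial state information of every device for the
    combination in which target_device sits directly in front of the user."""
    return CANDIDATE_COMBINATIONS[target_device]
```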
And step 106, performing spatial mixing processing on the audio data played by the first electronic device and the audio data played by each second electronic device according to each piece of spatial state information to obtain first target audio data.
The spatial mixing process may be a process of mixing audio data from respective sound sources in the presence of at least two sound sources at different positions, so that when the mixed audio data is played by one audio playing device, each audio data sounds as if it were transmitted from its corresponding sound source rather than from the same direction. In this process, although a mixing operation is performed on the plurality of audio data, the mixing operation may simply combine the plurality of audio data into one audio file; when played, each audio data is still heard independently and comes from its corresponding sound source, with the plurality of sound sources located in different directions relative to the user. For example, the spatial position information of the first electronic device corresponds to directly in front of the user, the value of the volume information of the first electronic device is x, the spatial position information of the second electronic device corresponds to the left side of the user, and the value of the volume information of the second electronic device is y, where x > y. Then, after the spatial mixing processing is performed on the first audio data and the second audio data, the first audio data sounds as if it comes from in front of the user at volume x, and the second audio data sounds as if it comes from the left of the user at volume y.
Through the spatial mixing processing, an audio experience with a definite sense of direction can be obtained: each audio data is heard as coming from its corresponding sound source, the sound sources can be located in different directions, and audio parameters such as the volume of each audio data can be set flexibly. In this way, a single audio playing device playing the spatially mixed audio data can achieve the effect of multiple audio playing devices, located in different directions of the user, simultaneously playing their corresponding audio data.
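One way to realize such a mix on a plain stereo output is constant-power panning per source. This is only a stand-in for the HRTF-style rendering an actual spatial mixer would apply, and all names and the sample format here are illustrative assumptions:

```python
import math

def spatial_mix(sources):
    """Mix mono sources into one stereo stream.

    sources: list of (samples, azimuth_deg, volume), where azimuth_deg is
    -90 for hard left, 0 for directly in front, +90 for hard right, and
    samples is a list of floats in [-1.0, 1.0].
    Returns (left_channel, right_channel) as lists of floats.
    """
    length = max(len(samples) for samples, _, _ in sources)
    left = [0.0] * length
    right = [0.0] * length
    for samples, azimuth_deg, volume in sources:
        pan = max(-90.0, min(90.0, azimuth_deg))
        theta = math.radians((pan + 90.0) / 2.0)  # 0 .. pi/2
        gain_left = math.cos(theta) * volume      # constant-power pan law
        gain_right = math.sin(theta) * volume
        for i, sample in enumerate(samples):
            left[i] += sample * gain_left
            right[i] += sample * gain_right
    return left, right
```

A source at azimuth 0 contributes equally to both channels; a source at +90 ends up almost entirely in the right channel, which matches the "main audio in front, auxiliary audio to the side" behavior described above.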
And step 108, sending the first target audio data to an audio playing device for playing.
The first electronic device may send the first target audio data to the headset so that the first target audio data is played through the headset. For example, the spatial state information of the first electronic device corresponds to the front of the user, and the spatial state information of the second electronic device corresponds to the left of the user, so that the user can hear the first audio data coming from the front and having a larger volume, and can hear the second audio data coming from the left and having a smaller volume when listening to the first target audio data through the earphone.
Optionally, the audio playing method further includes: under the condition that the audio data played by the first electronic device is replaced, performing, according to each piece of spatial state information, spatial mixing processing on the replaced audio data played by the first electronic device and the audio data played by each second electronic device to obtain second target audio data, and sending the second target audio data to the audio playing device for playing; or, under the condition that the audio data played by any second electronic device is replaced, performing, according to each piece of spatial state information, spatial mixing processing on the audio data played by the first electronic device and the replaced audio data played by each second electronic device to obtain third target audio data, and sending the third target audio data to the audio playing device for playing.
In the embodiment of the audio playing method shown in fig. 1, a control instruction of the spatial state information is obtained; the spatial state information of the first electronic device and each second electronic device is determined according to the control instruction; according to each piece of spatial state information, spatial mixing processing is performed on the audio data played by the first electronic device and the audio data played by each second electronic device to obtain first target audio data; and the first target audio data is sent to the audio playing device for playing. Through the technical scheme of the embodiment of the application, the spatial mixing processing can be performed according to the spatial state information respectively corresponding to the first electronic device and the at least one second electronic device, avoiding the sense of disorientation a user experiences when simultaneously listening to different audio data played by a plurality of electronic devices located at different positions.
Based on the same technical concept, the present application further provides an embodiment of an audio playing method, as shown in fig. 6. Fig. 6 is a second flowchart illustrating an audio playing method according to an embodiment of the present application. The audio playing device in this embodiment may be a headphone.
Referring to fig. 6, in step 602, a first electronic device and a second electronic device are connected to each other, and a headset is connected to the first electronic device.
Step 604, the second electronic device transmits the audio data to the first electronic device.
In step 606, the user sets spatial location information according to the actual location state or preference of the device.
In step 608, the user sets the volume levels of different scenes, and the user can configure various volume states.
Step 610, performing spatial mixing processing on the audio data played by the first electronic device and the second electronic device, and outputting the audio data to an earphone.
After step 610, at least one of step 612, step 618, and step 620 may be performed.
Step 612, detecting the pupil fixation point.
And step 614, determining whether the target electronic device is switched.
If yes, go back to step 608; if not, go to step 616.
Step 616, the original output state is maintained.
At step 618, it is determined whether the audio data is replaced.
If yes, go back to step 606; if not, go to step 616.
Step 620, determine whether the volume status is switched.
If yes, go back to step 608; if not, go to step 616.
The audio playing method provided in the embodiment shown in fig. 6 can implement each process implemented in the foregoing audio playing method embodiment, and is not described herein again to avoid repetition.
It should be noted that, in the audio playing method provided in the embodiment of the present application, the execution subject may be an audio playing apparatus, or a control module in the audio playing apparatus for executing the audio playing method. The audio playing apparatus provided in the embodiment of the present application is described by taking an audio playing apparatus executing the audio playing method as an example.
Fig. 7 is a schematic structural diagram of an audio playing apparatus according to an embodiment of the present application.
Referring to fig. 7, an audio playing apparatus is applied to a first electronic device, the first electronic device is connected to at least one second electronic device, the first electronic device is connected to an audio playing device, and the audio playing apparatus includes:
an obtaining module 701, configured to obtain a control instruction of the spatial state information;
a determining module 702, configured to determine, according to the control instruction, spatial state information of the first electronic device and each of the second electronic devices;
the processing module 703 is configured to perform spatial mixing processing on the audio data played by the first electronic device and the audio data played by each second electronic device according to each piece of spatial state information, so as to obtain first target audio data;
a sending module 704, configured to send the first target audio data to an audio playing device for playing.
Optionally, the first electronic device is a primary audio device; each second electronic device is an auxiliary audio device; the obtaining module is specifically configured to:
acquiring a first main and auxiliary switching instruction; the first main and auxiliary switching instruction is used for changing the space state information of the first electronic equipment and each second electronic equipment, so that the space state information of the selected second electronic equipment corresponds to the front of the user, and the space state information of the first electronic equipment corresponds to the non-front of the user;
or, the at least one second electronic device comprises a target second electronic device; the target second electronic device is the primary audio device, and the first electronic device is an auxiliary audio device; the obtaining module is specifically configured to:
acquiring a second main and auxiliary switching instruction; the second main and auxiliary switching instruction is used for changing the spatial state information of the first electronic device and the target second electronic device, so that the spatial state information of the first electronic device corresponds to the front of the user, and the spatial state information of the target second electronic device corresponds to the non-front of the user.
Optionally, the obtaining module 701 is specifically configured to:
and acquiring a first detection result of the first electronic device aiming at the pupil fixation point and a second detection result of each second electronic device aiming at the pupil fixation point.
Optionally, the determining module 702 includes:
a first determining unit, configured to determine, according to the first detection result and the at least one second detection result, a target position gazed by the pupil;
and the second determining unit is used for determining the space state information of the first electronic equipment and each second electronic equipment according to the target position.
Optionally, the second determining unit is specifically configured to:
determining a target electronic device watched by the pupil and at least one non-target electronic device not watched in the first electronic device and the at least one second electronic device according to the target position;
determining space state information of the target electronic equipment according to the target position;
and determining the space state information of each non-target electronic device according to a preset space state information set and the space state information of the target electronic device.
Optionally, the second determining unit is specifically configured to:
determining a target electronic device watched by the pupil and at least one non-target electronic device not watched in the first electronic device and the at least one second electronic device according to the target position;
determining a corresponding space state information combination from a plurality of preset alternative space state information combinations according to the target electronic equipment; the spatial state information combination corresponding to the target electronic device comprises the spatial state information of the target electronic device and the spatial state information of each non-target electronic device.
Optionally, the audio playing apparatus further includes:
the audio mixing module is configured to, in a case where the audio data played by the first electronic device is replaced, perform spatial mixing processing on the replaced audio data played by the first electronic device and the audio data played by each second electronic device according to each piece of spatial state information to obtain second target audio data, and send the second target audio data to the audio playing device for playing; or, in a case where the audio data played by any second electronic device is replaced, perform spatial mixing processing on the audio data played by the first electronic device and the replaced audio data played by each second electronic device according to each piece of spatial state information to obtain third target audio data, and send the third target audio data to the audio playing device for playing.
Optionally, the spatial state information includes spatial position information and audio parameters, and the obtaining module 701 includes:
a first receiving unit, configured to receive a first control instruction for spatial location information;
and/or,
and the second receiving unit is used for receiving a second control instruction aiming at the audio parameter.
Optionally, the first receiving unit is specifically configured to:
for any one of the first electronic device and the at least one second electronic device, determining, on a user interaction interface, the position information of the virtual sound source corresponding to the electronic device in the preset sphere as the spatial position information of the electronic device;
receiving a position adjusting instruction of a virtual sound source; the position adjusting instruction is used for adjusting the position information of the virtual sound source in the preset sphere.
Optionally, the audio parameter comprises volume information; the second receiving unit is specifically configured to:
in the first electronic device and the at least one second electronic device, a volume adjusting instruction for any one electronic device is received on a user interaction interface.
The audio playing apparatus provided by the embodiment of the application acquires a control instruction of the spatial state information; determines the spatial state information of the first electronic device and each second electronic device according to the control instruction; performs, according to each piece of spatial state information, spatial mixing processing on the audio data played by the first electronic device and the audio data played by each second electronic device to obtain first target audio data; and sends the first target audio data to the audio playing device for playing. Through the technical scheme of the embodiment of the application, the spatial mixing processing can be performed according to the spatial state information respectively corresponding to the first electronic device and the at least one second electronic device, avoiding the sense of disorientation a user experiences when simultaneously listening to different audio data played by a plurality of electronic devices located at different positions.
The audio playing device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The audio playing apparatus in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or other possible operating systems, and embodiments of the present application are not specifically limited.
The audio playing device provided in the embodiment of the present application can implement each process implemented in the foregoing audio playing method embodiment, and is not described here again to avoid repetition.
Optionally, as shown in fig. 8, an electronic device 800 is further provided in this embodiment of the present application, and includes a processor 801, a memory 802, and a program or an instruction stored in the memory 802 and executable on the processor 801, where the program or the instruction is executed by the processor 801 to implement each process of the foregoing audio playing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic device and the non-mobile electronic device described above.
Fig. 9 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 900 includes, but is not limited to: a radio frequency unit 901, a network module 902, an audio output unit 903, an input unit 904, a sensor 905, a display unit 906, a user input unit 907, an interface unit 908, a memory 909, and a processor 910.
Those skilled in the art will appreciate that the electronic device 900 may further include a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 910 through a power management system, so as to manage charging, discharging, and power consumption management functions through the power management system. The electronic device structure shown in fig. 9 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is not repeated here.
The processor 910 is configured to obtain a control instruction of the spatial state information;
determining space state information of the first electronic equipment and each second electronic equipment according to the control instruction;
according to each piece of spatial state information, performing spatial mixing processing on the audio data played by the first electronic device and the audio data played by each second electronic device to obtain first target audio data;
and sending the first target audio data to the audio playing device for playing.
In the embodiment of the application, a control instruction of the spatial state information is acquired; the spatial state information of the first electronic device and each second electronic device is determined according to the control instruction; according to each piece of spatial state information, spatial mixing processing is performed on the audio data played by the first electronic device and the audio data played by each second electronic device to obtain first target audio data; and the first target audio data is sent to the audio playing device for playing. Through the technical scheme of the embodiment of the application, the spatial mixing processing can be performed according to the spatial state information respectively corresponding to the first electronic device and the at least one second electronic device, avoiding the sense of disorientation a user experiences when simultaneously listening to different audio data played by a plurality of electronic devices located at different positions.
Optionally, the first electronic device is a primary audio device located directly in front of the user; each second electronic device is an auxiliary audio device located at a position other than directly in front of the user; the processor 910 is configured to:
the control instruction for acquiring the space state information comprises the following steps:
acquiring a first main and auxiliary switching instruction; the first main and auxiliary switching instruction is used for changing the space state information of the first electronic equipment and each second electronic equipment, so that the space state information of the selected second electronic equipment corresponds to the front of the user, and the space state information of the first electronic equipment corresponds to the non-front of the user;
or, the at least one second electronic device comprises a target second electronic device; the target second electronic device is the primary audio device, and the first electronic device is an auxiliary audio device; the acquiring of the control instruction of the spatial state information includes:
acquiring a second main and auxiliary switching instruction; the second main and auxiliary switching instruction is used for changing the spatial state information of the first electronic device and the target second electronic device, so that the spatial state information of the first electronic device corresponds to the front of the user, and the spatial state information of the target second electronic device corresponds to the non-front of the user.
Optionally, the processor 910 is further configured to:
the control instruction for acquiring the space state information comprises the following steps:
and acquiring a first detection result of the first electronic device aiming at the pupil fixation point and a second detection result of each second electronic device aiming at the pupil fixation point.
Optionally, the processor 910 is further configured to:
according to the control instruction, determining the space state information of the first electronic device and each second electronic device comprises the following steps:
determining a target position watched by the pupil according to the first detection result and the at least one second detection result;
and determining the space state information of the first electronic equipment and each second electronic equipment according to the target position.
Optionally, the processor 910 is further configured to:
the determining the spatial state information of the first electronic device and each second electronic device according to the target position comprises:
determining, in the first electronic device and the at least one second electronic device according to the target position, a target electronic device gazed at by the pupil and at least one non-target electronic device not gazed at;
determining the spatial state information of the target electronic device according to the target position;
and determining the spatial state information of each non-target electronic device according to a preset spatial state information set and the spatial state information of the target electronic device.
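The assignment above can be sketched as follows: the gazed-at target device takes the state derived from the target position, while the remaining devices draw the leftover states from the preset spatial state information set. All identifiers and the string state labels below are illustrative assumptions:

```python
def assign_states(device_ids, target_id, target_state, preset_states):
    """Give the gazed-at target device its gaze-derived state; hand the
    remaining preset states out to the non-target devices in order."""
    spare = [s for s in preset_states if s != target_state]
    states, i = {}, 0
    for dev in device_ids:
        if dev == target_id:
            states[dev] = target_state
        else:
            states[dev] = spare[i]
            i += 1
    return states
```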
Optionally, the processor 910 is further configured to:
the determining the spatial state information of the first electronic device and each second electronic device according to the target position comprises:
determining, in the first electronic device and the at least one second electronic device according to the target position, a target electronic device gazed at by the pupil and at least one non-target electronic device not gazed at;
determining a corresponding spatial state information combination from a plurality of preset alternative spatial state information combinations according to the target electronic device; the spatial state information combination corresponding to the target electronic device comprises the spatial state information of the target electronic device and the spatial state information of each non-target electronic device.
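This variant replaces per-device computation with a table lookup: each candidate target device is mapped to one preset combination that covers every device at once. A hypothetical sketch with made-up device identifiers and state labels (none of these names appear in the disclosure):

```python
# Preset alternative spatial-state combinations, keyed by whichever
# device the pupil is gazing at; each value assigns a state to every device.
COMBINATIONS = {
    "first":    {"first": "front", "second_1": "left",  "second_2": "right"},
    "second_1": {"first": "left",  "second_1": "front", "second_2": "right"},
    "second_2": {"first": "left",  "second_1": "right", "second_2": "front"},
}

def pick_combination(target_device_id):
    """Select the preset combination for the gazed-at target device."""
    return COMBINATIONS[target_device_id]
```

The lookup guarantees that the gazed-at device always lands directly in front while the others take the remaining preset positions, with no per-device recomputation.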
Optionally, the processor 910 is further configured to:
in a case that the audio data played by the first electronic device is replaced, performing, according to each piece of spatial state information, spatial mixing processing on the replaced audio data played by the first electronic device and the audio data played by each second electronic device to obtain second target audio data, and sending the second target audio data to the audio playing device for playing;
or,
in a case that the audio data played by any second electronic device is replaced, performing, according to each piece of spatial state information, spatial mixing processing on the audio data played by the first electronic device and the replaced audio data played by each second electronic device to obtain third target audio data, and sending the third target audio data to the audio playing device for playing.
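The disclosure does not specify the spatial mixing algorithm itself; a common stand-in is constant-power panning, where each device's audio is weighted into the left and right channels according to the azimuth carried in its spatial state information. A minimal sketch under that assumption (azimuth 0° taken as directly in front, -90°/+90° as fully left/right; all names are illustrative):

```python
import math

def spatial_mix(sources):
    """sources: list of (samples, azimuth_deg, gain); returns (left, right).
    Constant-power panning: each source is split between channels so that
    the summed power stays constant as the azimuth sweeps left to right."""
    n = max(len(s) for s, _, _ in sources)
    left = [0.0] * n
    right = [0.0] * n
    for samples, azimuth, gain in sources:
        # Map azimuth in [-90, 90] degrees to a pan angle in [0, pi/2].
        theta = (azimuth + 90.0) / 180.0 * math.pi / 2.0
        gl, gr = math.cos(theta) * gain, math.sin(theta) * gain
        for i, x in enumerate(samples):
            left[i] += gl * x
            right[i] += gr * x
    return left, right
```

Re-mixing after an audio replacement then means calling `spatial_mix` again with the replaced stream substituted into `sources` while the spatial state information is left unchanged.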
Optionally, the processor 910 is further configured to:
the spatial state information comprises spatial position information and audio parameters, and the acquiring a control instruction of spatial state information comprises:
receiving a first control instruction for the spatial position information;
and/or,
receiving a second control instruction for the audio parameters.
Optionally, the processor 910 is further configured to:
the receiving a first control instruction for the spatial position information comprises:
for any one electronic device in the first electronic device and the at least one second electronic device, determining, on a user interaction interface, position information of a virtual sound source corresponding to the electronic device in a preset sphere as the spatial position information of the electronic device;
receiving a position adjustment instruction for the virtual sound source; the position adjustment instruction is used for adjusting the position information of the virtual sound source in the preset sphere.
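The virtual sound source's position on the preset sphere can be represented by the sphere radius plus two angles, so a position adjustment instruction only needs to update the angles. A sketch assuming a right/up/front Cartesian convention — the convention and the function name are assumptions, not stated in the disclosure:

```python
import math

def sphere_position(radius, azimuth_deg, elevation_deg):
    """Convert a virtual sound source's spherical position (radius plus
    azimuth/elevation set on the interaction interface) into Cartesian
    coordinates, with 0 degrees azimuth and elevation directly in front."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = radius * math.cos(el) * math.sin(az)   # right of the user
    y = radius * math.sin(el)                  # above the user
    z = radius * math.cos(el) * math.cos(az)   # in front of the user
    return x, y, z
```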
Optionally, the audio parameter comprises volume information; processor 910, further configured to:
the receiving a second control instruction for the audio parameters comprises:
for any one electronic device in the first electronic device and the at least one second electronic device, receiving, on a user interaction interface, a volume adjustment instruction for the electronic device.
In the embodiments of this application, switching between the primary sound device and a secondary sound device can be performed flexibly among the first electronic device and the at least one second electronic device through the first main-auxiliary switching instruction and the second main-auxiliary switching instruction; through pupil fixation point detection, the spatial position information and the spatial volume information can be controlled freely and flexibly, providing the user with the audio playing effect best suited to the user as the user's gaze shifts naturally; and by receiving, on the user interaction interface, position adjustment instructions for the virtual sound sources and volume adjustment instructions for the electronic devices, the spatial state information of each electronic device can be set flexibly, enriching the auditory effect.
It should be understood that, in this embodiment of the application, the input unit 904 may include a graphics processing unit (GPU) 9041 and a microphone 9042; the graphics processing unit 9041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 906 may include a display panel 9061, and the display panel 9061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 907 includes a touch panel 9071, also called a touch screen, and other input devices 9072. The touch panel 9071 may include two parts: a touch detection device and a touch controller. Other input devices 9072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 909 can be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 910 may integrate an application processor, which primarily handles the operating system, user interfaces, and applications, and a modem processor, which primarily handles wireless communication. It should be understood that the modem processor may alternatively not be integrated into the processor 910.
An embodiment of the present application further provides a readable storage medium. The readable storage medium stores a program or an instruction, and when the program or the instruction is executed by a processor, the processes of the foregoing audio playing method embodiment are implemented, with the same technical effect achieved. To avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the foregoing embodiment. The readable storage medium includes a computer-readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
An embodiment of the present application further provides a chip. The chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement the processes of the foregoing audio playing method embodiment, with the same technical effect achieved. To avoid repetition, details are not repeated here.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a chip system, or a system-on-chip.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the present embodiments are not limited to those precise embodiments, which are intended to be illustrative rather than restrictive, and that various changes and modifications may be effected therein by one skilled in the art without departing from the scope of the appended claims.

Claims (12)

1. An audio playing method, applied to a first electronic device, wherein the first electronic device is connected with at least one second electronic device and with an audio playing device, the method comprising:
acquiring a control instruction of spatial state information;
determining spatial state information of the first electronic device and each second electronic device according to the control instruction;
performing, according to each piece of spatial state information, spatial mixing processing on the audio data played by the first electronic device and the audio data played by each second electronic device to obtain first target audio data;
and sending the first target audio data to the audio playing device for playing.
2. The method according to claim 1, wherein the first electronic device is a primary sound device located directly in front of a user, and each second electronic device is a secondary sound device located not directly in front of the user; the acquiring a control instruction of spatial state information comprises:
acquiring a first main-auxiliary switching instruction, wherein the first main-auxiliary switching instruction is used for changing the spatial state information of the first electronic device and each second electronic device, so that the spatial state information of a selected second electronic device corresponds to a position directly in front of the user, and the spatial state information of the first electronic device corresponds to a position not directly in front of the user;
or, the at least one second electronic device comprises a target second electronic device, the target second electronic device is the primary sound device, and the first electronic device is the secondary sound device; the acquiring a control instruction of spatial state information comprises:
acquiring a second main-auxiliary switching instruction, wherein the second main-auxiliary switching instruction is used for changing the spatial state information of the first electronic device and the target second electronic device, so that the spatial state information of the first electronic device corresponds to a position directly in front of the user, and the spatial state information of the target second electronic device corresponds to a position not directly in front of the user.
3. The method according to claim 1, wherein the acquiring a control instruction of spatial state information comprises:
acquiring a first detection result of the first electronic device for the pupil fixation point and a second detection result of each second electronic device for the pupil fixation point.
4. The method according to claim 3, wherein the determining spatial state information of the first electronic device and each second electronic device according to the control instruction comprises:
determining a target position gazed at by the pupil according to the first detection result and the at least one second detection result;
and determining the spatial state information of the first electronic device and each second electronic device according to the target position.
5. The method according to claim 4, wherein the determining the spatial state information of the first electronic device and each second electronic device according to the target position comprises:
determining, in the first electronic device and the at least one second electronic device according to the target position, a target electronic device gazed at by the pupil and at least one non-target electronic device not gazed at;
determining the spatial state information of the target electronic device according to the target position;
and determining the spatial state information of each non-target electronic device according to a preset spatial state information set and the spatial state information of the target electronic device.
6. The method according to claim 4, wherein the determining the spatial state information of the first electronic device and each second electronic device according to the target position comprises:
determining, in the first electronic device and the at least one second electronic device according to the target position, a target electronic device gazed at by the pupil and at least one non-target electronic device not gazed at;
determining a corresponding spatial state information combination from a plurality of preset alternative spatial state information combinations according to the target electronic device, wherein the spatial state information combination corresponding to the target electronic device comprises the spatial state information of the target electronic device and the spatial state information of each non-target electronic device.
7. The method according to claim 1, further comprising:
in a case that the audio data played by the first electronic device is replaced, performing, according to each piece of spatial state information, spatial mixing processing on the replaced audio data played by the first electronic device and the audio data played by each second electronic device to obtain second target audio data, and sending the second target audio data to the audio playing device for playing;
or,
in a case that the audio data played by any second electronic device is replaced, performing, according to each piece of spatial state information, spatial mixing processing on the audio data played by the first electronic device and the replaced audio data played by each second electronic device to obtain third target audio data, and sending the third target audio data to the audio playing device for playing.
8. The method according to claim 1, wherein the spatial state information comprises spatial position information and audio parameters, and the acquiring a control instruction of spatial state information comprises:
receiving a first control instruction for the spatial position information;
and/or,
receiving a second control instruction for the audio parameters.
9. The method according to claim 8, wherein the receiving a first control instruction for the spatial position information comprises:
for any one electronic device in the first electronic device and the at least one second electronic device, determining, on a user interaction interface, position information of a virtual sound source corresponding to the electronic device in a preset sphere as the spatial position information of the electronic device;
receiving a position adjustment instruction for the virtual sound source, wherein the position adjustment instruction is used for adjusting the position information of the virtual sound source in the preset sphere.
10. The method according to claim 8, wherein the audio parameters comprise volume information, and the receiving a second control instruction for the audio parameters comprises:
receiving, on a user interaction interface, a volume adjustment instruction for any one electronic device in the first electronic device and the at least one second electronic device.
11. An audio playing apparatus, applied to a first electronic device, wherein the first electronic device is connected with at least one second electronic device and with an audio playing device, the apparatus comprising:
an acquisition module, configured to acquire a control instruction of spatial state information;
a determining module, configured to determine spatial state information of the first electronic device and each second electronic device according to the control instruction;
a processing module, configured to perform, according to each piece of spatial state information, spatial mixing processing on the audio data played by the first electronic device and the audio data played by each second electronic device to obtain first target audio data;
and a sending module, configured to send the first target audio data to the audio playing device for playing.
12. An electronic device, comprising a processor, a memory, and a program or instruction stored in the memory and executable on the processor, wherein the program or instruction, when executed by the processor, implements the steps of the audio playing method according to any one of claims 1 to 10.
CN202210225832.8A 2022-03-07 2022-03-07 Audio playing method and electronic equipment Pending CN114650496A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210225832.8A CN114650496A (en) 2022-03-07 2022-03-07 Audio playing method and electronic equipment
PCT/CN2023/079874 WO2023169367A1 (en) 2022-03-07 2023-03-06 Audio playing method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210225832.8A CN114650496A (en) 2022-03-07 2022-03-07 Audio playing method and electronic equipment

Publications (1)

Publication Number Publication Date
CN114650496A true CN114650496A (en) 2022-06-21

Family

ID=81993315

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210225832.8A Pending CN114650496A (en) 2022-03-07 2022-03-07 Audio playing method and electronic equipment

Country Status (2)

Country Link
CN (1) CN114650496A (en)
WO (1) WO2023169367A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023169367A1 (en) * 2022-03-07 2023-09-14 维沃移动通信有限公司 Audio playing method and electronic device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7876903B2 (en) * 2006-07-07 2011-01-25 Harris Corporation Method and apparatus for creating a multi-dimensional communication space for use in a binaural audio system
US10038957B2 (en) * 2013-03-19 2018-07-31 Nokia Technologies Oy Audio mixing based upon playing device location
JP2021090156A (en) * 2019-12-04 2021-06-10 ローランド株式会社 headphone
CN113890932A (en) * 2020-07-02 2022-01-04 华为技术有限公司 Audio control method and system and electronic equipment
CN112581932A (en) * 2020-11-26 2021-03-30 交通运输部南海航海保障中心广州通信中心 Wired and wireless sound mixing system based on DSP
CN113823250B (en) * 2021-11-25 2022-02-22 广州酷狗计算机科技有限公司 Audio playing method, device, terminal and storage medium
CN114650496A (en) * 2022-03-07 2022-06-21 维沃移动通信有限公司 Audio playing method and electronic equipment

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023169367A1 (en) * 2022-03-07 2023-09-14 维沃移动通信有限公司 Audio playing method and electronic device

Also Published As

Publication number Publication date
WO2023169367A1 (en) 2023-09-14

Similar Documents

Publication Publication Date Title
CN110719529B (en) Multi-channel video synchronization method, device, storage medium and terminal
CN103002376A (en) Method for orientationally transmitting voice and electronic equipment
US20170192741A1 (en) Method, System, and Computer Storage Medium for Voice Control of a Split-Screen Terminal
CN110809226A (en) Audio playing method and electronic equipment
CN112764710A (en) Audio playing mode switching method and device, electronic equipment and storage medium
WO2022268024A1 (en) Video playback method and apparatus, and electronic device
CN112394901A (en) Audio output mode adjusting method and device and electronic equipment
WO2023169367A1 (en) Audio playing method and electronic device
CN112309449A (en) Audio recording method and device
WO2023246166A1 (en) Method and apparatus for adjusting video progress, and computer device and storage medium
CN112291672A (en) Speaker control method, control device and electronic equipment
CN108882112B (en) Audio playing control method and device, storage medium and terminal equipment
Marentakis et al. A comparison of feedback cues for enhancing pointing efficiency in interaction with spatial audio displays
CN114520950B (en) Audio output method, device, electronic equipment and readable storage medium
CN113115179B (en) Working state adjusting method and device
CN113840033B (en) Audio data playing method and device
CN113992786A (en) Audio playing method and device
CN112788489B (en) Control method and device and electronic equipment
CN113793625A (en) Audio playing method and device
CN113407147A (en) Audio playing method, device, equipment and storage medium
CN111176605A (en) Audio output method and electronic equipment
CN113746982B (en) Audio playing control method and device, electronic equipment and readable storage medium
CN104423871A (en) Information processing method and electronic device
CN115348240B (en) Voice call method, device, electronic equipment and storage medium for sharing document
CN113038333B (en) Bluetooth headset control method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination