CN114257920A - Audio playing method and system and electronic equipment - Google Patents
Audio playing method and system and electronic equipment
- Publication number
- CN114257920A (application CN202210174200.3A)
- Authority
- CN
- China
- Prior art keywords
- electronic device
- relative position
- wearable device
- wearable
- audio data
- Legal status: Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/20—Arrangements for obtaining desired frequency or directional characteristics
- H04R1/32—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
- H04R1/34—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by using a single transducer with sound reflecting, diffracting, directing or guiding means
- H04R1/345—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by using a single transducer with sound reflecting, diffracting, directing or guiding means for loudspeakers
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/302—Electronic adaptation of stereophonic sound system to listener position or orientation
- H04S7/303—Tracking of listener position or orientation
- H04S7/304—For headphones
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Otolaryngology (AREA)
- Stereophonic System (AREA)
Abstract
In the technical solutions of the audio playing method, the audio playing system and the electronic device provided by the embodiments of the present invention, a first electronic device is connected to a wearable device worn by a user, the wearable device has a spatial audio function, and the first electronic device sends audio data to the wearable device. The first electronic device detects a first relative position between the wearable device and the first electronic device, where the first relative position includes a first relative distance and a first relative direction between the wearable device and the first electronic device, and determines whether the first relative position has changed. If the first relative position has changed, the spatial audio output by the wearable device is adjusted according to the change of the first relative position. Thus, according to the embodiments of the present invention, when only the electronic device is moved and the head of the user is not rotated, the spatial audio output by the wearable device can still change correspondingly.
Description
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an audio playing method, system and electronic device.
Background
When a user wears a wearable device (such as an earphone) to listen to music or watch videos on an electronic device and turns his head, the wearable device detects, through a motion sensor, that the position of the user's head relative to the virtual sound source (i.e., the electronic device) has changed, and the output spatial audio is adjusted accordingly. However, when the electronic device is moved while the user's head does not rotate, the spatial audio output by the wearable device does not change correspondingly.
Disclosure of Invention
In view of this, embodiments of the present invention provide an audio playing method, system and electronic device, so that when only the electronic device is moved but the head of the user is not rotated, the spatial audio output by the wearable device can change accordingly.
In a first aspect, an embodiment of the present invention provides an audio playing method, which is applied to a first electronic device, where the first electronic device is connected to a wearable device worn by a user, the wearable device has a spatial audio function, and the first electronic device is configured to send audio data to the wearable device, and the method includes:
detecting a first relative position of the wearable device and the first electronic device, the first relative position including a first relative distance and a first relative direction between the wearable device and the first electronic device;
judging whether the first relative position changes or not;
and if the first relative position is judged to be changed, adjusting the spatial audio output by the wearable device according to the change of the first relative position.
In the embodiment of the invention, if the first relative distance or the first relative direction between the wearable device and the first electronic device is changed, the spatial audio output by the wearable device is adjusted, so that when only the first electronic device is moved but the head of the user is not rotated, the spatial audio output by the wearable device can be correspondingly changed.
With reference to the first aspect, in certain implementations of the first aspect, the first electronic device includes a first sensor, and the detecting a first relative position of the wearable device and the first electronic device includes:
detecting the first relative position by the first sensor.
With reference to the first aspect, in certain implementations of the first aspect, the adjusting spatial audio output by the wearable device according to the change in the first relative position includes:
adjusting the audio data according to the change of the first relative position to obtain first adjusted audio data;
and adjusting the spatial audio output by the wearable device by sending the first adjusted audio data to the wearable device.
In the embodiment of the invention, according to the change of the first relative position, the first electronic device adjusts the audio data and then sends the adjusted audio data to the wearable device, and the wearable device directly outputs the adjusted audio data, so that the effect of adjusting the spatial audio output by the wearable device is achieved.
With reference to the first aspect, in certain implementations of the first aspect, when the first electronic device is connected to a second electronic device, and the first electronic device transmits video data to the second electronic device, the method further includes:
receiving a second relative position sent by the second electronic device, wherein the second relative position comprises a second relative distance and a second relative direction between the wearable device and the second electronic device;
judging whether the second relative position changes;
and if the second relative position is judged to be changed, adjusting the spatial audio output by the wearable device according to the change of the second relative position.
In the screen projection scene, the first electronic device receives a second relative position between the wearable device and the second electronic device, which is sent by the second electronic device, and adjusts the spatial audio output by the wearable device according to the change of the second relative position, so that the virtual sound source is switched from the first electronic device to the second electronic device.
With reference to the first aspect, in certain implementations of the first aspect, when the first electronic device is connected to a second electronic device, and the first electronic device transmits video data and audio data to the second electronic device, the method further includes:
receiving second adjusted audio data sent by the second electronic device, where the second adjusted audio data includes audio data generated by the second electronic device according to the audio data and the change of the second relative position;
and adjusting the spatial audio output by the wearable device according to the second adjusted audio data.
In the screen projection scene, the first electronic device receives the second adjusted audio data sent by the second electronic device, and adjusts the spatial audio output by the wearable device according to the second adjusted audio data, so that the virtual sound source is switched from the first electronic device to the second electronic device.
With reference to the first aspect, in certain implementations of the first aspect, when the first electronic device is connected to a second electronic device, and the first electronic device transmits video data and audio data to the second electronic device, the method further includes:
disconnecting from the wearable device, so that the second electronic device is connected with the wearable device.
In the screen projection scene, the first electronic device disconnects from the wearable device, so that the second electronic device is connected with the wearable device, and the virtual sound source is switched from the first electronic device to the second electronic device.
With reference to the first aspect, in certain implementations of the first aspect, the first sensor comprises a UWB sensor or a camera.
With reference to the first aspect, in certain implementations of the first aspect, the wearable device includes a headset.
In a second aspect, an embodiment of the present invention provides an audio playing method, which is applied to a second electronic device, where the second electronic device is connected to a first electronic device, the first electronic device is connected to a wearable device worn by a user, the wearable device has a spatial audio function, and the second electronic device is configured to receive screen projection data sent by the first electronic device, where the method includes:
detecting a second relative position between the wearable device and the second electronic device, the second relative position including a second relative distance and a second relative direction between the wearable device and the second electronic device;
and adjusting the spatial audio output by the wearable device according to the second relative position.
With reference to the second aspect, in some implementations of the second aspect, when the screen projection data includes video data, the adjusting spatial audio output by the wearable device according to the second relative position includes:
controlling the first electronic device to adjust the spatial audio output by the wearable device by sending the second relative position to the first electronic device.
With reference to the second aspect, in some implementations of the second aspect, when the screen projection data includes video data and audio data, the adjusting spatial audio output by the wearable device according to the second relative position includes:
judging whether the second relative position changes;
if the second relative position is judged to be changed, second adjusted audio data are generated according to the audio data and the change of the second relative position;
and sending the second adjusted audio data to the first electronic equipment to control the first electronic equipment to adjust the spatial audio output by the wearable equipment.
With reference to the second aspect, in certain implementations of the second aspect, when the first electronic device is disconnected from the wearable device, the second electronic device is connected to the wearable device, and the screen projection data includes video data and audio data, the method further includes:
transmitting the audio data to the wearable device.
With reference to the second aspect, in some implementations of the second aspect, the adjusting spatial audio output by the wearable device according to the second relative position includes:
judging whether the second relative position changes;
and if the second relative position is judged to be changed, adjusting the spatial audio output by the wearable device according to the change of the second relative position.
With reference to the second aspect, in some implementations of the second aspect, the detecting a second relative position between the wearable device and the second electronic device includes:
if the second electronic device is judged to comprise the first sensor, detecting a second relative position between the wearable device and the second electronic device through the first sensor;
and if the second electronic device does not comprise the first sensor, detecting the second relative position between the wearable device and the second electronic device through a second sensor.
With reference to the second aspect, in certain implementations of the second aspect, the second sensor comprises a UWB sensor or a camera.
With reference to the second aspect, in certain implementations of the second aspect, the wearable device includes a headset.
In a third aspect, an embodiment of the present invention provides an audio playing system, including: a first electronic device and/or a second electronic device, where the first electronic device is configured to execute the method according to any one of the above first aspects, and the second electronic device is configured to execute the method according to any one of the above second aspects;
and a wearable device, where the wearable device has a spatial audio function.
In a fourth aspect, an embodiment of the present invention provides an electronic device, including a processor and a memory, where the memory is used to store a computer program, the computer program including program instructions which, when executed by the processor, cause the electronic device to execute the steps of the audio playing method according to any one of the above.
In a fifth aspect, the present invention provides a computer-readable storage medium, in which a computer program is stored, the computer program including program instructions which, when executed by a computer, cause the computer to execute the method according to any one of the above.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is an architecture diagram of an audio playing system according to an embodiment of the present invention;
fig. 2 is an architecture diagram of another audio playing system according to an embodiment of the present invention;
fig. 3 is an architecture diagram of another audio playing system according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an array antenna based UWB positioning scheme;
FIG. 5 is a schematic diagram of a UWB sensor based measurement of a first relative position;
fig. 6 is a signaling interaction diagram of an audio playing method according to an embodiment of the present invention;
fig. 7 is a signaling interaction diagram of another audio playing method according to an embodiment of the present invention;
fig. 8 is a signaling interaction diagram of another audio playing method according to an embodiment of the present invention;
fig. 9 is a flowchart of an audio playing method according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a first electronic device according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of a second electronic device according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
For better understanding of the technical solutions of the present invention, the following detailed descriptions of the embodiments of the present invention are provided with reference to the accompanying drawings.
It should be understood that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein is merely one type of associative relationship that describes an associated object, meaning that three types of relationships may exist, e.g., A and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
Some electronic devices and wearable devices (e.g., headphones) currently support a spatial audio function, which simulates traditional surround sound in headphones through directional audio filtering and fine tuning of the sound frequencies received by each of the user's ears.
For example, when a user wears a headset to listen to music or watch a movie on a mobile phone and turns his head, the headset detects the position change of the user's head relative to the virtual sound source (i.e., the mobile phone) through a motion sensor, and the spatial audio heard by the user from the headset changes accordingly. However, when the user's head does not rotate and the mobile phone is moved, the headset cannot detect through the motion sensor any position change of the user's head relative to the virtual sound source (i.e., the mobile phone), so the spatial audio heard by the user from the headset cannot change correspondingly.
Further, when the user projects the video from the first electronic device to the second electronic device, the virtual sound source of the spatial audio is not switched from the first electronic device to the second electronic device. For example: when a user projects a movie from a mobile phone to a television, the virtual sound source of the spatial audio is not switched from the mobile phone to the television.
Therefore, current audio playing based on the spatial audio function provides a poor user experience.
Based on the foregoing technical problem, an embodiment of the present invention provides an audio playing system. Fig. 1 is an architecture diagram of an audio playing system according to an embodiment of the present invention, fig. 2 is an architecture diagram of another audio playing system according to an embodiment of the present invention, and fig. 3 is an architecture diagram of another audio playing system according to an embodiment of the present invention.
As shown in fig. 1, the audio playing system includes a first electronic device 10 and a wearable device 20, and the first electronic device 10 and the wearable device 20 are connected by wire or wirelessly. The first electronic device 10 is configured to transmit spatial audio data to the wearable device 20, and the wearable device 20 outputs spatial audio according to the audio data. Specifically, the wearable device includes a headphone 21, and the headphone 21 is configured to output spatial audio according to the audio data. The first electronic device 10 comprises a first sensor 11. The first sensor 11 is used to detect a first relative position between the first electronic device 10 and the wearable device 20; for example, the first sensor 11 includes an Ultra-Wideband (UWB) sensor or a camera. The first relative position includes a first relative distance and a first relative direction between the first electronic device 10 and the wearable device 20. The first electronic device 10 is further configured to determine whether the first relative position changes, and if it is determined that the first relative position changes, adjust the spatial audio output by the wearable device 20 according to the change of the first relative position. Specifically, the first electronic device 10 is configured to adjust the audio data according to the change of the first relative position to obtain first adjusted audio data, and then adjust the spatial audio output by the wearable device 20 by sending the first adjusted audio data to the wearable device 20.
In some embodiments, when the first sensor comprises a UWB sensor, UWB-based position detection techniques may be employed to detect the first relative position between the first electronic device 10 and the wearable device 20. Specifically, the UWB-based position detection technology may be an array-antenna UWB positioning scheme, in which a single positioning device achieves three-dimensional positioning of a positioning tag through an antenna array. As shown in fig. 4, for positioning in a two-dimensional plane, the number N of antennas of the antenna array is 3, and the antennas are named positioning antenna A, positioning antenna B and positioning antenna C, where positioning antenna A transmits and receives ranging frames, and positioning antenna B and positioning antenna C receive ranging frames; positioning antenna A, positioning antenna B and positioning antenna C, arranged on the positioning base station, are located on the same horizontal plane. The positioning base station calculates the distance d from positioning antenna A to the antenna port of the positioning tag by using a two-way ranging algorithm. Positioning antenna B and positioning antenna C then use the speed of light c and the time difference of arrival Td to calculate the arrival path difference Δd, where Δd = Td × c. From B = d + Δd, the relative distance between positioning antenna B and the positioning tag is obtained as B1 = d + Δd1, and the relative distance between positioning antenna C and the positioning tag as B2 = d + Δd2. Combined with the inter-antenna distances L, which can be measured in the field, L = {L1, L2, L3}, where L1 is the distance between antenna A and antenna B, L2 is the distance between antenna A and antenna C, and L3 is the distance between antenna B and antenna C, the pitch angle and the azimuth angle of the incident signal can easily be calculated by elementary geometry, since all three side lengths of the triangle formed by the positioning tag, positioning antenna A and positioning antenna B (or positioning antenna C) are known. With the angle of arrival and the relative distance of the tag signal known, the positioning base station can determine the spatial position of the positioning tag relative to the positioning base station by geometric positioning, so that the positioning base station can locate the positioning tag.
In some embodiments, where the first sensor comprises a UWB sensor, the first electronic device 10 needs to record the relative distance and relative direction between the first electronic device 10 and the wearable device 20 measured by the UWB sensor at different times during movement. For example, as shown in fig. 5, three circles represent three relative distances between the mobile phone and the headset measured by the UWB sensor at three different times during movement, and the intersection point of the three circles represents the position of the headset.
Wherein the first electronic device 10 comprises a terminal device with computing capabilities, such as: a host, a mobile phone, a tablet, a notebook, a kiosk, or a wireless keyboard containing a processor, etc. The wearable device 20 has spatial audio functionality, such as: the wearable device 20 includes headphones or smart glasses or the like.
For example, when a user wears an earphone to listen to music or watch video from a mobile phone, whether the user turns his head or only the mobile phone is moved without the head rotating, the position of the virtual sound source of the spatial audio relative to the head changes, and the spatial audio output by the earphone changes accordingly.
In the embodiment of the present invention, the first electronic device 10 detects, by the first sensor 11, a first relative position between the first electronic device 10 and the wearable device 20, where the first relative position includes a first relative distance and a first relative direction, so that when only the first electronic device 10 is moved without rotating the head of the user, the spatial audio output by the wearable device 20 can also change accordingly.
As shown in fig. 2 and fig. 3, the audio playing system further includes a second electronic device 30. The second electronic device 30 is connected to the first electronic device by wire or wirelessly. The first electronic device 10 is configured to transmit the screen projection data to the second electronic device 30. The second electronic device 30 is configured to receive the screen projection data and complete a display operation according to the screen projection data. The second electronic device 30 comprises a screen 31. The display operation includes the second electronic device 30 rendering a picture according to the screen projection data and displaying the picture through the screen 31. The second electronic device 30 comprises a terminal device with display capability, such as: a mobile phone, a tablet, a notebook computer or a television, etc.
Further, the second electronic device 30 comprises a second sensor 32. The second sensor 32 is used to detect a second relative position between the second electronic device 30 and the wearable device 20, such as: the second sensor 32 includes a UWB sensor or a camera. The second relative position includes a second relative distance and a second relative orientation between the second electronic device 30 and the wearable device 20.
In some embodiments, as shown in FIG. 2, the screen projection data includes video data, and the second electronic device 30 is further configured to transmit the second relative position to the first electronic device 10. The first electronic device 10 is configured to transmit audio data to the wearable device 20. The first electronic device 10 is further configured to determine whether the second relative position changes, and adjust the spatial audio output by the wearable device 20 according to the change of the second relative position if it is determined that the second relative position changes. Specifically, the first electronic device 10 adjusts the audio data according to the change of the second relative position to obtain second adjusted audio data, and then adjusts the spatial audio output by the wearable device 20 by sending the second adjusted audio data to the wearable device 20, so that the virtual sound source of the spatial audio is switched from the first electronic device 10 to the second electronic device 30 in the screen projection scene.
For example: when a user projects a movie from the mobile phone onto the television, the television only receives the video data sent by the mobile phone, and the mobile phone still keeps sending audio data to the earphone; the television then detects a second relative position between the television and the earphone through a second sensor on the television, and sends the second relative position to the mobile phone; the mobile phone judges whether the second relative position has changed, adjusts the audio data according to the change of the second relative position if it has changed, and then adjusts the spatial audio output by the earphone by sending the adjusted audio data to the earphone.
In some embodiments, as shown in fig. 2, the screen projection data includes video data and audio data, and the second electronic device 30 is further configured to determine whether the second relative position is changed, adjust the audio data according to the change of the second relative position to obtain second adjusted audio data if it is determined that the second relative position is changed, and send the second adjusted audio data to the first electronic device 10. The first electronic device 10 is further configured to adjust the spatial audio output by the wearable device 20 according to the second adjusted audio data. Specifically, the first electronic device 10 adjusts the spatial audio output by the wearable device 20 by sending the second adjusted audio data to the wearable device 20, so that the virtual sound source of the spatial audio is switched from the first electronic device 10 to the second electronic device 30 in the screen projection scene.
For example: a user wears an earphone to watch a movie from a mobile phone, and when the user projects the movie from the mobile phone onto a television, the mobile phone sends audio data and video data to the television; the television then detects a second relative position between the television and the earphone through a second sensor on the television, judges whether the second relative position has changed, adjusts the audio data according to the change of the second relative position if it has changed, and then sends the adjusted audio data to the mobile phone; the mobile phone adjusts the spatial audio output by the earphone by sending the adjusted audio data to the earphone.
In some embodiments, as shown in fig. 3, when a wired or wireless connection is established between the second electronic device 30 and the first electronic device, the first electronic device 10 and the wearable device 20 are disconnected, and the second electronic device 30 and the wearable device 20 are connected in a wired or wireless manner. The screen projection data includes video data and audio data. The second electronic device 30 is further configured to send the audio data to the wearable device 20, determine whether the second relative position has changed, adjust the audio data according to the change of the second relative position to obtain second adjusted audio data if it is determined that the second relative position has changed, and adjust the spatial audio output by the wearable device 20 by sending the second adjusted audio data to the wearable device 20, so that the virtual sound source of the spatial audio is switched from the first electronic device 10 to the second electronic device 30 in the screen projection scene.
In the embodiment of the present invention, before detecting the relative position between the electronic device and the wearable device 20, the electronic device needs to determine whether it includes a UWB sensor; if it includes a UWB sensor, the UWB sensor is preferentially used to detect the first relative position or the second relative position; if it does not include a UWB sensor, the camera is used to detect the first relative position or the second relative position.
It should be noted that the wearable device 20 further includes a motion sensor, such as an accelerometer, a gyroscope, etc. The motion sensor is capable of detecting small movements of the user's head. When the motion sensor detects that the head position of the user has changed, the wearable device 20 further adjusts its spatial audio output according to the change of the head position. When the head of the user moves with a small amplitude, the electronic device may not detect the change of the relative position through the UWB sensor or the camera, so adjusting the output spatial audio when the motion sensor on the wearable device 20 detects a change of the head position improves accuracy.
It should be noted that when the electronic device detects that the relative position between the electronic device and the wearable device 20 is not changed, the electronic device does not need to adjust the spatial audio output by the wearable device 20.
In the technical solution of the audio playing system provided by the embodiment of the invention, on one hand, the first electronic device is connected with a wearable device worn by a user, the wearable device has a spatial audio function, and the first electronic device detects a first relative position between the wearable device and the first electronic device, where the first relative position includes a first relative distance and a first relative direction between the wearable device and the first electronic device; the first electronic device judges whether the first relative position has changed, and if so, adjusts the spatial audio output by the wearable device according to the change of the first relative position, so that when only the electronic device is moved and the head of the user is not rotated, the spatial audio output by the wearable device can change correspondingly. On the other hand, the second electronic device is connected with the first electronic device by wire or wirelessly, and the first electronic device sends screen projection data to the second electronic device; the second electronic device detects a second relative position between the second electronic device and the wearable device, and adjusts the spatial audio output by the wearable device according to the second relative position, so that switching of the virtual sound source of the spatial audio is realized in the screen projection scene.
Based on the architecture diagrams shown in fig. 1 and fig. 2, an embodiment of the present invention provides a signaling interaction diagram of an audio playing method, as shown in fig. 6, where the method includes:
and 102, establishing connection between the first electronic equipment and the wearable equipment.
In this embodiment of the present invention, the first electronic device includes a terminal device with computing capability, for example: a host, a mobile phone, a tablet, a notebook, a kiosk, or a wireless keyboard containing a processor, etc. The wearable device has a spatial audio function; for example, the wearable device includes headphones or smart glasses and the like. The first electronic device and the wearable device are connected in a wired or wireless manner.
Step 104, the first electronic device sends audio data to the wearable device.
In the embodiment of the invention, the wearable device comprises a receiver, and the receiver outputs spatial audio according to the audio data.
Step 106, the first electronic device detects a first relative position between the first electronic device and the wearable device, judges whether the first relative position changes, and if it is judged that the first relative position changes, adjusts the audio data according to the change of the first relative position to obtain first adjusted audio data.
Specifically, the first electronic device includes a first sensor. The first sensor is used for detecting a first relative position between the first electronic device and the wearable device, for example: the first sensor comprises a UWB sensor or a camera. The first relative position includes a first relative distance and a first relative direction between the first electronic device and the wearable device.
Step 108, the first electronic device adjusts the spatial audio output by the wearable device by sending the first adjusted audio data to the wearable device.
It should be noted that, when the first electronic device detects that the first relative position between the first electronic device and the wearable device is not changed, the first electronic device does not need to adjust the spatial audio output by the wearable device.
It should be noted that the wearable device further includes a motion sensor, for example: an accelerometer, a gyroscope, etc. The motion sensor is capable of detecting small movements of the user's head. When the motion sensor detects that the head position of the user has changed, the wearable device adjusts its spatial audio output according to the change of the head position. When the head of the user moves with a small amplitude, the electronic device may not detect the change of the relative position through the UWB sensor or the camera, so adjusting the output spatial audio when the motion sensor on the wearable device detects a change of the head position improves accuracy.
Step 110, the first electronic device establishes a connection with the second electronic device.
In this step, the user wants to project the video on the first electronic device onto the second electronic device for viewing, so the first electronic device is connected to the second electronic device. The second electronic device comprises a terminal device with display capability, such as: a mobile phone, a tablet, a notebook computer or a television, etc. The second electronic device is connected with the first electronic device in a wired or wireless manner.
Step 112, the first electronic device sends screen projection data to the second electronic device, where the screen projection data includes video data.
In the embodiment of the invention, the second electronic device receives the screen projection data and completes a display operation according to the screen projection data. The second electronic device includes a screen. The display operation comprises the second electronic device rendering pictures according to the screen projection data and displaying the pictures through the screen.
For example: when a user projects the film from the mobile phone to the television, the television only receives video data sent by the mobile phone, and the mobile phone still keeps sending audio data to the earphone.
Step 114, the second electronic device detects a second relative position between the second electronic device and the wearable device.
In an embodiment of the invention, the second electronic device comprises a second sensor. The second sensor is used to detect a second relative position between the second electronic device and the wearable device, for example: the second sensor comprises a UWB sensor or a camera. The second relative position includes a second relative distance and a second relative orientation between the second electronic device and the wearable device.
Step 116, the second electronic device sends the second relative position to the first electronic device.
For example: the user wears the earphone to watch the movie from the mobile phone; when the user projects the movie from the mobile phone onto the television, the television detects a second relative position between the television and the earphone through a second sensor on the television, and sends the second relative position to the mobile phone.
Step 118, the first electronic device determines whether the second relative position changes, and adjusts the audio data according to the change of the second relative position to obtain second adjusted audio data if it is determined that the second relative position changes.
For example: the mobile phone judges whether the second relative position has changed, and if it is judged that the second relative position has changed, adjusts the audio data according to the change of the second relative position to obtain second adjusted audio data.
It should be noted that, when the second relative position between the second electronic device and the wearable device is not changed, the second electronic device does not need to adjust the spatial audio output by the wearable device.
Step 120, the first electronic device adjusts the spatial audio output by the wearable device by sending the second adjusted audio data to the wearable device.
For example: and the mobile phone adjusts the spatial audio output by the earphone by sending the second adjusted audio data to the earphone.
In the embodiment of the present invention, on one hand, the first electronic device is connected with a wearable device worn by a user, the wearable device has a spatial audio function, and the first electronic device detects a first relative position between the wearable device and the first electronic device, where the first relative position includes a first relative distance and a first relative direction between the wearable device and the first electronic device; the first electronic device judges whether the first relative position has changed, and if so, adjusts the spatial audio output by the wearable device according to the change of the first relative position, so that when only the electronic device is moved and the head of the user is not rotated, the spatial audio output by the wearable device can change correspondingly. On the other hand, when the video on the first electronic device is projected to the second electronic device for viewing, the second electronic device detects a second relative position between the second electronic device and the wearable device, and sends the second relative position to the first electronic device to control the first electronic device to adjust the spatial audio output by the wearable device, so that the virtual sound source of the spatial audio is switched from the first electronic device to the second electronic device in the screen projection scene.
Based on the architecture diagrams shown in fig. 1 and fig. 2, an embodiment of the present invention provides a signaling interaction diagram of another audio playing method. As shown in fig. 7, the method includes:
step 202, the first electronic device establishes a connection with the wearable device.
In this embodiment of the present invention, the first electronic device includes a terminal device with computing capability, for example: a host, a mobile phone, a tablet, a notebook, a kiosk, or a wireless keyboard containing a processor, etc. The wearable device has a spatial audio function; for example, the wearable device includes headphones or smart glasses and the like. The first electronic device and the wearable device are connected in a wired or wireless manner.
Step 204, the first electronic device sends audio data to the wearable device.
In the embodiment of the invention, the wearable device comprises a receiver, and the receiver outputs spatial audio according to the audio data.
Step 206, the first electronic device detects a first relative position between the first electronic device and the wearable device, determines whether the first relative position changes, and adjusts the audio data according to the change of the first relative position to obtain first adjusted audio data if the first relative position changes.
Specifically, the first electronic device includes a first sensor. The first sensor is used for detecting a first relative position between the first electronic device and the wearable device, for example: the first sensor comprises a UWB sensor or a camera. The first relative position includes a first relative distance and a first relative direction between the first electronic device and the wearable device.
Step 208, the first electronic device adjusts the spatial audio output by the wearable device by sending the first adjusted audio data to the wearable device.
It should be noted that, when the first electronic device detects that the first relative position between the first electronic device and the wearable device is not changed, the first electronic device does not need to adjust the spatial audio output by the wearable device.
It should be noted that the wearable device further includes a motion sensor, for example: an accelerometer, a gyroscope, etc. The motion sensor is capable of detecting small movements of the user's head. When the motion sensor detects that the head position of the user has changed, the wearable device adjusts its spatial audio output according to the change of the head position. When the head of the user moves with a small amplitude, the electronic device may not detect the change of the relative position through the UWB sensor or the camera, so adjusting the output spatial audio when the motion sensor on the wearable device detects a change of the head position improves accuracy.
Step 210, the first electronic device establishes a connection with the second electronic device.
In this step, the user wants to project the video on the first electronic device onto the second electronic device for viewing, so the first electronic device is connected to the second electronic device. The second electronic device comprises a terminal device with display capability, such as: a mobile phone, a tablet, a notebook computer or a television, etc. The second electronic device is connected with the first electronic device in a wired or wireless manner.
Step 212, the first electronic device sends screen projection data to the second electronic device, wherein the screen projection data comprises video data and audio data.
In the embodiment of the invention, the second electronic device receives the screen projection data and completes a display operation according to the screen projection data. The second electronic device includes a screen. The display operation comprises the second electronic device rendering pictures according to the video data and displaying the pictures through the screen.
For example: the user wears the earphone to watch the film from the mobile phone, and when the user projects the film from the mobile phone to the television, the television receives video data and audio data sent by the mobile phone.
Step 214, the second electronic device detects a second relative position between the second electronic device and the wearable device, determines whether the second relative position changes, and adjusts the audio data according to the change of the second relative position to obtain second adjusted audio data if it is determined that the second relative position changes.
And step 216, the second electronic device sends the second adjusted audio data to the first electronic device.
In this step, the second electronic device adjusts the spatial audio output by the wearable device through the first electronic device by sending the second adjusted audio data to the first electronic device.
Step 218, the first electronic device sends the second adjusted audio data to the wearable device to adjust the spatial audio output by the wearable device.
In the embodiment of the present invention, on one hand, the first electronic device is connected with a wearable device worn by a user, the wearable device has a spatial audio function, and the first electronic device detects a first relative position between the wearable device and the first electronic device, where the first relative position includes a first relative distance and a first relative direction between the wearable device and the first electronic device; the first electronic device judges whether the first relative position has changed, and if so, adjusts the spatial audio output by the wearable device according to the change of the first relative position, so that when only the electronic device is moved and the head of the user is not rotated, the spatial audio output by the wearable device can change correspondingly. On the other hand, when the video on the first electronic device is projected to the second electronic device for viewing, the second electronic device detects a second relative position between the second electronic device and the wearable device; when the second relative position changes, the audio data is adjusted according to the change of the second relative position to obtain second adjusted audio data, and the second adjusted audio data is sent to the first electronic device to control the first electronic device to adjust the spatial audio output by the wearable device, so that the virtual sound source of the spatial audio is switched from the first electronic device to the second electronic device in the screen projection scene.
Based on the architecture diagrams shown in fig. 1 and fig. 3, an embodiment of the present invention provides a signaling interaction diagram of another audio playing method. As shown in fig. 8, the method includes:
step 402, the first electronic device establishes a connection with the wearable device.
In this embodiment of the present invention, the first electronic device includes a terminal device with computing capability, for example: a host, a mobile phone, a tablet, a notebook, a kiosk, or a wireless keyboard containing a processor, etc. The wearable device has a spatial audio function; for example, the wearable device includes headphones or smart glasses and the like. The first electronic device and the wearable device are connected in a wired or wireless manner.
Step 404, the first electronic device sends audio data to the wearable device.
In the embodiment of the invention, the wearable device comprises a receiver, and the receiver outputs spatial audio according to the audio data.
Step 406, the first electronic device detects a first relative position between the first electronic device and the wearable device, determines whether the first relative position changes, and adjusts the audio data according to a change of the first relative position to obtain first adjusted audio data if it is determined that the first relative position changes.
Specifically, the first electronic device includes a first sensor. The first sensor is used for detecting a first relative position between the first electronic device and the wearable device, for example: the first sensor comprises a UWB sensor or a camera. The first relative position includes a first relative distance and a first relative direction between the first electronic device and the wearable device.
Step 408, the first electronic device adjusts the spatial audio output by the wearable device by sending the first adjusted audio data to the wearable device.
It should be noted that, when the first electronic device detects that the first relative position between the first electronic device and the wearable device is not changed, the first electronic device does not need to adjust the spatial audio output by the wearable device.
It should be noted that the wearable device further includes a motion sensor, for example: an accelerometer, a gyroscope, etc. The motion sensor is capable of detecting small movements of the user's head. When the motion sensor detects that the head position of the user has changed, the wearable device adjusts its spatial audio output according to the change of the head position. When the head of the user moves with a small amplitude, the electronic device may not detect the change of the relative position through the UWB sensor or the camera, so adjusting the output spatial audio when the motion sensor on the wearable device detects a change of the head position improves accuracy.
Step 410, the first electronic device establishes a connection with the second electronic device.
In this step, the user wants to project the video on the first electronic device onto the second electronic device for viewing, so the first electronic device is connected to the second electronic device. The second electronic device comprises a terminal device with display capability, such as: a mobile phone, a tablet, a notebook computer or a television, etc. The second electronic device is connected with the first electronic device in a wired or wireless manner.
Step 412, the first electronic device sends screen projection data to the second electronic device, where the screen projection data includes video data and audio data.
In the embodiment of the invention, the second electronic device receives the screen projection data and completes a display operation according to the screen projection data. The second electronic device includes a screen. The display operation comprises the second electronic device rendering pictures according to the video data and displaying the pictures through the screen.
For example: the user wears the earphone to watch the film from the mobile phone, and when the user projects the film from the mobile phone to the television, the television receives video data and audio data sent by the mobile phone.
Step 414, the first electronic device is disconnected from the wearable device.
For example, a user wearing a headset watches a movie from a mobile phone; when the user projects the movie from the mobile phone onto a television, the headset and the mobile phone are disconnected.
Step 416, the second electronic device establishes a connection with the wearable device.
For example, a user wears a headset to watch a movie from a mobile phone, and when the user projects the movie from the mobile phone to a television, the headset and the television are connected after the headset and the mobile phone are disconnected.
Step 418, the second electronic device sends the audio data to the wearable device.
In this step, after the second electronic device is connected with the wearable device, the audio data is sent to the wearable device, and the virtual sound source of the wearable device is switched to the second electronic device.
Step 420, the second electronic device detects a second relative position between the second electronic device and the wearable device, determines whether the second relative position changes, and adjusts the audio data according to the change of the second relative position to obtain second adjusted audio data if it is determined that the second relative position changes.
Step 422, the second electronic device sends the second adjusted audio data to the wearable device to adjust the spatial audio output by the wearable device.
In this step, the second electronic device directly adjusts the spatial audio output by the wearable device by sending the second adjusted audio data to the wearable device.
In the embodiment of the present invention, on one hand, the first electronic device is connected with a wearable device worn by a user, the wearable device has a spatial audio function, and the first electronic device detects a first relative position between the wearable device and the first electronic device, where the first relative position includes a first relative distance and a first relative direction between the wearable device and the first electronic device; the first electronic device judges whether the first relative position has changed, and if so, adjusts the spatial audio output by the wearable device according to the change of the first relative position, so that when only the electronic device is moved and the head of the user is not rotated, the spatial audio output by the wearable device can change correspondingly. On the other hand, when the video on the first electronic device is projected onto the second electronic device for viewing, the wearable device is disconnected from the first electronic device and connected with the second electronic device; the second electronic device detects a second relative position between the second electronic device and the wearable device; when the second relative position changes, the audio data is adjusted according to the change of the second relative position to obtain second adjusted audio data, and the second adjusted audio data is sent to the wearable device to directly adjust the spatial audio output by the wearable device, so that the virtual sound source of the spatial audio is switched from the first electronic device to the second electronic device in the screen projection scene.
Fig. 9 is a flowchart of another screen projection method provided by an embodiment of the invention, based on the architectures shown in Figs. 1-3. The first electronic device is connected to a wearable device worn by the user, the wearable device has a spatial audio function, and the first electronic device is configured to send audio data to the wearable device. As shown in Fig. 9, the method includes:
In this step, the first electronic device detects the first relative position through a first sensor, which comprises a UWB sensor or a camera.
In this step, if it is determined that the first relative position changes, the spatial audio output by the wearable device is adjusted according to the change of the first relative position. Specifically, the first electronic device adjusts the audio data according to the change of the first relative position to obtain first adjusted audio data, and adjusts the spatial audio output by the wearable device by sending the first adjusted audio data to the wearable device.
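The application does not limit how the adjusted audio data is computed. Purely as a hedged illustration of one plausible approach, the sketch below derives left/right channel gains from a relative distance and direction using inverse-distance attenuation and constant-power panning; the formulas and function names are assumptions, not the claimed rendering method.

```python
import math

def render_spatial(samples, rel_distance_m, rel_azimuth_deg, ref_distance_m=1.0):
    """Illustrative only: derive left/right channel gains from a relative
    position (distance + direction) and apply them to a mono signal.

    rel_azimuth_deg: direction of the sound source relative to the listener,
                     0 = straight ahead, negative = left, positive = right.
    """
    # Inverse-distance attenuation, clamped at the reference distance
    # so very near sources do not produce gains above 1.
    attenuation = ref_distance_m / max(rel_distance_m, ref_distance_m)

    # Constant-power pan: map azimuth [-90, 90] degrees onto [0, pi/2].
    clamped = max(-90.0, min(90.0, rel_azimuth_deg))
    pan = (clamped + 90.0) / 180.0 * (math.pi / 2.0)
    left_gain = attenuation * math.cos(pan)
    right_gain = attenuation * math.sin(pan)

    left = [s * left_gain for s in samples]
    right = [s * right_gain for s in samples]
    return left, right

# Example: the phone is moved 2 m away, 30 degrees to the listener's right;
# the right channel becomes louder and both channels are attenuated.
left, right = render_spatial([0.1, 0.2, -0.1], 2.0, 30.0)
```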
Step 508: the first electronic device establishes a connection with the second electronic device.
In this step, the second electronic device detects the second relative position between the wearable device and the second electronic device through a second sensor, which comprises a UWB sensor or a camera.
It should be noted that, before step 512, the second electronic device needs to determine whether it includes a UWB sensor; if it does, the second relative position is detected through the UWB sensor, and if it does not, the second relative position is detected through the camera.
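The sensor-selection logic just described (prefer UWB, fall back to the camera) can be sketched as follows; the class and method names are hypothetical stubs, not APIs from this application.

```python
class UwbSensor:
    """Stub: a real UWB module would return measured ranging values."""
    def range_and_bearing(self):
        return (1.5, 20.0)   # (distance in metres, azimuth in degrees)

class Camera:
    """Stub: a real implementation would locate the wearable in frames."""
    def estimate_position(self):
        return (2.0, -10.0)

def detect_second_relative_position(uwb_sensor=None, camera=None):
    """Prefer UWB when the device has one; otherwise fall back to the camera."""
    if uwb_sensor is not None:
        return uwb_sensor.range_and_bearing()
    if camera is not None:
        return camera.estimate_position()
    raise RuntimeError("no position sensor available")

print(detect_second_relative_position(uwb_sensor=UwbSensor()))  # UWB path
print(detect_second_relative_position(camera=Camera()))         # camera fallback
```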
Step 514: the second electronic device adjusts the spatial audio output by the wearable device according to the second relative position.
In some embodiments, when the screen projection data includes video data, step 514 specifically includes: the second electronic device sends the second relative position to the first electronic device to control the first electronic device to adjust the spatial audio output by the wearable device. It should be noted that, after step 514, the method further includes: the first electronic device receives the second relative position sent by the second electronic device, determines whether the second relative position has changed, and, if so, adjusts the spatial audio output by the wearable device according to the change of the second relative position.
In some embodiments, when the screen projection data includes video data and audio data, step 514 specifically includes: determining whether the second relative position has changed, and, if so, generating second adjusted audio data according to the audio data and the change of the second relative position, and sending the second adjusted audio data to the first electronic device to control the first electronic device to adjust the spatial audio output by the wearable device. It should be noted that, after step 514, the method further includes: the first electronic device receives the second adjusted audio data sent by the second electronic device and adjusts the spatial audio output by the wearable device accordingly; specifically, the first electronic device sends the second adjusted audio data to the wearable device.
In some embodiments, after step 508, the method further comprises: the first electronic device disconnects from the wearable device, the second electronic device connects to the wearable device, and the second electronic device sends the audio data to the wearable device. Here the screen projection data includes video data and audio data, and step 514 specifically includes: determining whether the second relative position has changed, and, if so, generating second adjusted audio data according to the audio data and the change of the second relative position and sending it to the wearable device to adjust the spatial audio output by the wearable device.
In the technical solution of the audio playing method provided by the embodiment of the invention, on one hand, the first electronic device is connected to a wearable device worn by the user, the wearable device has a spatial audio function, and the first electronic device detects a first relative position between the wearable device and the first electronic device, the first relative position including a first relative distance and a first relative direction between them. The first electronic device determines whether the first relative position has changed and, if so, adjusts the spatial audio output by the wearable device according to the change, so that the spatial audio changes correspondingly even when only the electronic device is moved and the user's head is not rotated. On the other hand, the second electronic device is connected to the first electronic device by wire or wirelessly, and the first electronic device sends screen projection data to the second electronic device; the second electronic device detects a second relative position between itself and the wearable device and adjusts the spatial audio output by the wearable device according to the second relative position, thereby switching the virtual sound source of the spatial audio in the screen projection scene.
Fig. 10 is a schematic structural diagram of a first electronic device according to an embodiment of the present invention, and it should be understood that the first electronic device 600 is capable of executing the steps of the first electronic device in the audio playing method, and details thereof are not described herein to avoid repetition. The first electronic device 600 includes: a first processing unit 601 and a first transceiving unit 602.
The first processing unit 601 is configured to send audio data to the wearable device, detect a first relative position between the wearable device and the first electronic device, where the first relative position includes a first relative distance and a first relative direction between the wearable device and the first electronic device, determine whether the first relative position changes, and adjust a spatial audio output by the wearable device according to a change in the first relative position if the first relative position changes.
Optionally, the first electronic device includes a first sensor, and the first processing unit 601 is specifically configured to detect the first relative position through the first sensor.
Optionally, the first processing unit 601 is specifically configured to adjust the audio data according to the change of the first relative position to obtain first adjusted audio data; the first transceiving unit 602 is configured to adjust spatial audio output by the wearable device by sending the first adjusted audio data to the wearable device.
Optionally, when the first electronic device is connected to a second electronic device and the first electronic device transmits video data to the second electronic device, the first transceiver unit 602 is further configured to receive a second relative position transmitted by the second electronic device, where the second relative position includes a second relative distance and a second relative direction between the wearable device and the second electronic device. The first processing unit 601 is further configured to determine whether the second relative position changes, and adjust a spatial audio output by the wearable device according to the change of the second relative position if it is determined that the second relative position changes.
Optionally, when the first electronic device is connected to a second electronic device and the first electronic device sends video data and audio data to the second electronic device, the first transceiver unit 602 is further configured to receive second adjusted audio data sent by the second electronic device, where the second adjusted audio data includes audio data generated by the second electronic device according to the audio data and the change of the second relative position, and adjust spatial audio output by the wearable device according to the second adjusted audio data.
Optionally, when the first electronic device is connected to a second electronic device and the first electronic device sends video data and audio data to the second electronic device, the first processing unit 601 is further configured to disconnect from the wearable device, so that the second electronic device is connected to the wearable device.
Optionally, the first sensor comprises a UWB sensor or a camera.
Optionally, the wearable device comprises a headset.
Fig. 11 is a schematic structural diagram of a second electronic device according to an embodiment of the present invention, and it should be understood that the second electronic device 700 is capable of executing the steps of the second electronic device in the audio playing method, and details thereof are not described herein to avoid repetition. The second electronic device 700 includes: a second transceiving unit 701 and a second processing unit 702.
The second transceiver unit 701 is configured to receive screen projection data sent by the first electronic device.
The second processing unit 702 is configured to detect a second relative position between the wearable device and the second electronic device, where the second relative position includes a second relative distance and a second relative direction between the wearable device and the second electronic device, and adjust a spatial audio output by the wearable device according to the second relative position.
Optionally, when the screen projection data includes video data, the second transceiving unit 701 is specifically configured to control the first electronic device to adjust spatial audio output by the wearable device by sending the second relative position to the first electronic device.
Optionally, when the screen projection data includes video data and audio data, the second processing unit 702 is specifically configured to determine whether the second relative position changes, and if it is determined that the second relative position changes, generate second adjusted audio data according to the audio data and the change of the second relative position. The second transceiving unit 701 is further specifically configured to control the first electronic device to adjust the spatial audio output by the wearable device by sending the second adjusted audio data to the first electronic device.
Optionally, when the first electronic device is disconnected from the wearable device, the second electronic device is connected to the wearable device, and the screen projection data includes video data and audio data, the second transceiver unit 701 is further specifically configured to send the audio data to the wearable device.
Optionally, the second processing unit 702 is specifically configured to determine whether the second relative position changes, and if it is determined that the second relative position changes, adjust the spatial audio output by the wearable device according to the change of the second relative position.
Optionally, the second processing unit 702 is further specifically configured to detect a second relative position between the wearable device and the second electronic device through a second sensor.
Optionally, the second sensor comprises a UWB sensor or a camera.
Optionally, the wearable device comprises a headset.
It should be understood that the first electronic device 600 and the second electronic device 700 herein are embodied in the form of functional units. The term "unit" herein may be implemented in software and/or hardware, and is not particularly limited thereto. For example, a "unit" may be a software program, a hardware circuit, or a combination of both that implement the above-described functions. The hardware circuitry may include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (e.g., a shared processor, a dedicated processor, or a group of processors) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that support the described functionality.
Accordingly, the units of the respective examples described in the embodiments of the present invention can be realized in electronic hardware, or a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
An embodiment of the present application provides an electronic device, which may be a terminal device or a circuit device arranged in a terminal device. The electronic device may be adapted to perform the functions/steps of the first electronic device or the second electronic device in the above method embodiments.
Fig. 12 is a schematic structural diagram of an electronic device 300 according to an embodiment of the present application. The electronic device 300 may include a processor 310, an external memory interface 320, an internal memory 321, a Universal Serial Bus (USB) interface 330, a charging management module 340, a power management module 341, a battery 342, an antenna 1, an antenna 2, a mobile communication module 350, a wireless communication module 360, an audio module 370, a speaker 370A, a receiver 370B, a microphone 370C, an earphone interface 370D, a sensor module 380, keys 390, a motor 391, an indicator 392, a camera 393, a display 394, and a Subscriber Identification Module (SIM) card interface 395, and the like. The sensor module 380 may include a pressure sensor 380A, a gyroscope sensor 380B, an air pressure sensor 380C, a magnetic sensor 380D, an acceleration sensor 380E, a distance sensor 380F, a proximity light sensor 380G, a fingerprint sensor 380H, a temperature sensor 380J, a touch sensor 380K, an ambient light sensor 380L, a bone conduction sensor 380M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 300. In other embodiments of the present application, electronic device 300 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
A controller within the processor 310 can generate operation control signals according to the instruction operation code and timing signals, thereby controlling instruction fetching and instruction execution.
A memory may also be provided in the processor 310 for storing instructions and data. In some embodiments, the memory in the processor 310 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 310. If the processor 310 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 310, thereby increasing the efficiency of the system.
In some embodiments, processor 310 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, the processor 310 may include multiple sets of I2C buses. The processor 310 may be coupled to the touch sensor 380K, the charger, the flash, the camera 393, etc., via different I2C bus interfaces. For example: the processor 310 may be coupled to the touch sensor 380K via an I2C interface, such that the processor 310 and the touch sensor 380K communicate via an I2C bus interface to implement the touch functionality of the electronic device 300.
The I2S interface may be used for audio communication. In some embodiments, the processor 310 may include multiple sets of I2S buses. The processor 310 may be coupled to the audio module 370 via an I2S bus to enable communication between the processor 310 and the audio module 370. In some embodiments, the audio module 370 may communicate audio signals to the wireless communication module 360 via an I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 370 and the wireless communication module 360 may be coupled by a PCM bus interface. In some embodiments, the audio module 370 may also transmit audio signals to the wireless communication module 360 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 310 with the wireless communication module 360. For example: the processor 310 communicates with the bluetooth module in the wireless communication module 360 through the UART interface to implement the bluetooth function. In some embodiments, the audio module 370 may transmit the audio signal to the wireless communication module 360 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
The MIPI interface may be used to connect processor 310 with peripheral devices such as display 394, camera 393, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 310 and camera 393 communicate over a CSI interface to implement the capture functionality of electronic device 300. The processor 310 and the display screen 394 communicate via the DSI interface to implement the display functions of the electronic device 300.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 310 with the camera 393, the display 394, the wireless communication module 360, the audio module 370, the sensor module 380, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 330 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 330 may be used to connect a charger to charge the electronic device 300, to transmit data between the electronic device 300 and peripheral devices, or to connect headphones and play audio through them. The interface may also be used to connect other electronic devices, such as AR devices.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only an illustration, and does not limit the structure of the electronic device 300. In other embodiments of the present application, the electronic device 300 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 340 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 340 may receive charging input from a wired charger via the USB interface 330. In some wireless charging embodiments, the charging management module 340 may receive a wireless charging input through a wireless charging coil of the electronic device 300. The charging management module 340 may also supply power to the electronic device through the power management module 341 while charging the battery 342.
The power management module 341 is configured to connect the battery 342, the charging management module 340 and the processor 310. The power management module 341 receives input from the battery 342 and/or the charge management module 340 and provides power to the processor 310, the internal memory 321, the display 394, the camera 393, and the wireless communication module 360. The power management module 341 may also be configured to monitor parameters such as battery capacity, battery cycle count, and battery state of health (leakage, impedance). In other embodiments, the power management module 341 may also be disposed in the processor 310. In other embodiments, the power management module 341 and the charging management module 340 may be disposed in the same device.
The wireless communication function of the electronic device 300 may be implemented by the antenna 1, the antenna 2, the mobile communication module 350, the wireless communication module 360, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 300 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 350 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 300. The mobile communication module 350 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 350 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the filtered electromagnetic wave to the modem processor for demodulation. The mobile communication module 350 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 350 may be disposed in the processor 310. In some embodiments, at least some of the functional modules of the mobile communication module 350 may be disposed in the same device as at least some of the modules of the processor 310.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 370A, the receiver 370B, etc.) or displays images or video through the display 394. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be separate from the processor 310, and may be disposed in the same device as the mobile communication module 350 or other functional modules.
The wireless communication module 360 may provide solutions for wireless communication applied to the electronic device 300, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 360 may be one or more devices integrating at least one communication processing module. The wireless communication module 360 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 310. The wireless communication module 360 may also receive a signal to be transmitted from the processor 310, frequency-modulate and amplify the signal, and convert the signal into electromagnetic waves via the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of the electronic device 300 is coupled to the mobile communication module 350 and antenna 2 is coupled to the wireless communication module 360, so that the electronic device 300 may communicate with networks and other devices via wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include the Global Positioning System (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The electronic device 300 implements display functions via the GPU, the display 394, and the application processor, among other things. The GPU is an image processing microprocessor coupled to a display 394 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 310 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 394 is used to display images, video, and the like. The display screen 394 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 300 may include 1 or N display screens 394, N being a positive integer greater than 1.
The electronic device 300 may implement a shooting function through the ISP, the camera 393, the video codec, the GPU, the display 394, the application processor, and the like.
The ISP is used to process the data fed back by the camera 393. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be located in camera 393.
Camera 393 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, electronic device 300 may include 1 or N cameras 393, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 300 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 300 may support one or more video codecs. In this way, the electronic device 300 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. The NPU can realize applications such as intelligent recognition of the electronic device 300, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 320 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 300. The external memory card communicates with the processor 310 through the external memory interface 320 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 321 may be used to store computer-executable program code, which includes instructions. The internal memory 321 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The data storage area may store data (e.g., audio data, phone book, etc.) created during use of the electronic device 300, and the like. In addition, the internal memory 321 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like. The processor 310 executes various functional applications of the electronic device 300 and data processing by executing instructions stored in the internal memory 321 and/or instructions stored in a memory provided in the processor.
The electronic device 300 may implement audio functions through the audio module 370, the speaker 370A, the receiver 370B, the microphone 370C, the earphone interface 370D, and the application processor. Such as music playing, recording, etc.
The audio module 370 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 370 may also be used to encode and decode audio signals. In some embodiments, the audio module 370 may be disposed in the processor 310, or some functional modules of the audio module 370 may be disposed in the processor 310.
The speaker 370A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic device 300 can listen to music through the speaker 370A or listen to a hands-free conversation.
The receiver 370B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic device 300 receives a call or voice information, it can receive voice by placing the receiver 370B close to the ear of the person.
The headphone interface 370D is used to connect wired headphones. The headphone interface 370D may be the USB interface 330, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 380A is used to sense a pressure signal and convert it into an electrical signal. In some embodiments, the pressure sensor 380A may be disposed on the display screen 394. There are many types of pressure sensors 380A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates of electrically conductive material; when a force acts on the pressure sensor 380A, the capacitance between the electrodes changes, and the electronic device 300 determines the intensity of the pressure from the change in capacitance. When a touch operation is applied to the display screen 394, the electronic device 300 detects the intensity of the touch operation through the pressure sensor 380A, and may also calculate the touched position from its detection signal. In some embodiments, touch operations that act on the same touch position but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the SMS application icon, an instruction to view the message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the SMS application icon, an instruction to create a new message is executed.
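The pressure-threshold dispatch described above can be pictured as a simple mapping from touch intensity to an operation instruction; the threshold value and action names below are illustrative assumptions, not values from this application.

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # assumed value, normalised pressure units

def on_sms_icon_touch(intensity):
    """Dispatch a touch on the SMS icon by its detected pressure."""
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_message"   # light press: open the message
    return "new_message"        # firm press: create a new message

assert on_sms_icon_touch(0.2) == "view_message"
assert on_sms_icon_touch(0.8) == "new_message"
```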
The gyro sensor 380B may be used to determine the motion pose of the electronic device 300. In some embodiments, the angular velocities of the electronic device 300 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 380B. The gyro sensor 380B may be used for anti-shake during photographing: when the shutter is pressed, the gyro sensor 380B detects the shake angle of the electronic device 300, calculates the compensation distance for the lens module according to the shake angle, and lets the lens counteract the shake of the electronic device 300 through reverse motion. The gyro sensor 380B may also be used in navigation and somatosensory gaming scenarios.
The air pressure sensor 380C is used to measure air pressure. In some embodiments, electronic device 300 calculates altitude, aiding in positioning and navigation, from barometric pressure values measured by barometric pressure sensor 380C.
The magnetic sensor 380D includes a Hall sensor. The electronic device 300 may use the magnetic sensor 380D to detect the opening and closing of a flip holster. In some embodiments, when the electronic device 300 is a flip phone, it may detect the opening and closing of the flip cover according to the magnetic sensor 380D, and then set features such as automatic unlocking upon flip-open according to the detected open or closed state of the holster or flip cover.
The acceleration sensor 380E may detect the magnitude of the acceleration of the electronic device 300 in various directions (typically along three axes), and may detect the magnitude and direction of gravity when the electronic device 300 is stationary. It may also be used to recognize the posture of the electronic device, and is applied to landscape/portrait switching, pedometers, and other applications.
A distance sensor 380F is used to measure distance. The electronic device 300 may measure distance by infrared light or laser. In some embodiments, when photographing a scene, the electronic device 300 may use the distance sensor 380F to measure distance for fast focusing.
The proximity light sensor 380G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 300 emits infrared light outward through the light emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, the electronic device 300 can determine that an object is nearby; when insufficient reflected light is detected, it can determine that there is no object nearby. The electronic device 300 can use the proximity light sensor 380G to detect that the user is holding it close to the ear for a call, and automatically turn off the screen to save power. The proximity light sensor 380G can also be used in holster mode and in pocket mode to automatically unlock and lock the screen.
The ambient light sensor 380L is used to sense the ambient light level. The electronic device 300 may adaptively adjust the brightness of the display 394 based on the perceived ambient light level. The ambient light sensor 380L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 380L may also cooperate with the proximity light sensor 380G to detect whether the electronic device 300 is in a pocket to prevent inadvertent contact.
The fingerprint sensor 380H is used to capture a fingerprint. The electronic device 300 may utilize the collected fingerprint characteristics to implement fingerprint unlocking, access an application lock, fingerprint photographing, fingerprint incoming call answering, and the like.
The temperature sensor 380J is used to detect temperature. In some embodiments, the electronic device 300 implements a temperature processing strategy using the temperature detected by the temperature sensor 380J. For example, when the reported temperature exceeds a threshold, the electronic device 300 reduces the performance of a processor located near the temperature sensor 380J, so as to lower power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 300 heats the battery 342 to avoid an abnormal shutdown caused by low temperature. In still other embodiments, when the temperature is below a further threshold, the electronic device 300 boosts the output voltage of the battery 342 to avoid an abnormal shutdown caused by low temperature.
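The three thresholds described above amount to a small rule table. As a hedged sketch only, the numeric thresholds and action names below are placeholders rather than values from this application.

```python
def thermal_policy(temp_c, high=45.0, low=0.0, very_low=-10.0):
    """Return the actions the paragraph above describes for one reading.
    The numeric thresholds are assumptions, not values from the patent."""
    actions = []
    if temp_c > high:
        actions.append("throttle_nearby_processor")      # reduce heat
    if temp_c < low:
        actions.append("heat_battery")                   # avoid cold shutdown
    if temp_c < very_low:
        actions.append("boost_battery_output_voltage")   # keep device alive
    return actions or ["normal_operation"]

print(thermal_policy(50.0))    # ['throttle_nearby_processor']
print(thermal_policy(-15.0))   # ['heat_battery', 'boost_battery_output_voltage']
```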
The touch sensor 380K is also referred to as a "touch device". The touch sensor 380K may be disposed on the display screen 394, and the touch sensor 380K and the display screen 394 form a touch screen, which is also referred to as a "touch screen". The touch sensor 380K is used to detect a touch operation applied thereto or thereabout. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided via the display 394. In other embodiments, the touch sensor 380K can be disposed on a surface of the electronic device 300 at a different location than the display 394.
The bone conduction sensor 380M can acquire vibration signals. In some embodiments, the bone conduction sensor 380M can acquire the vibration signal of the bone mass that vibrates when a person speaks, and can also contact the human pulse to receive the blood-pressure pulsation signal. In some embodiments, the bone conduction sensor 380M may be disposed in a headset to form a bone conduction headset. The audio module 370 may parse out a voice signal based on the vibration signal of the voice-vibrating bone mass acquired by the bone conduction sensor 380M, realizing a voice function; the application processor may parse out heart-rate information based on the blood-pressure pulsation signal acquired by the bone conduction sensor 380M, realizing a heart-rate detection function.
The keys 390 include a power key, volume keys, and the like, and may be mechanical keys or touch keys. The electronic device 300 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 300.
The motor 391 may generate vibration prompts, and may be used for both incoming-call vibration prompts and touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing) may correspond to different vibration feedback effects, and touch operations acting on different areas of the display 394 may likewise correspond to different vibration feedback effects. Different application scenarios (e.g., time reminders, receiving messages, alarm clocks, games) may also correspond to different vibration feedback effects, and the touch vibration feedback effect may further support customization.
Indicator 392 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 395 is used to connect a SIM card. A SIM card can be brought into or out of contact with the electronic device 300 by being inserted into or pulled out of the SIM card interface 395. The electronic device 300 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 395 may support a Nano SIM card, a Micro SIM card, a standard SIM card, and the like. Multiple cards, of the same or different types, can be inserted into the same SIM card interface 395 simultaneously. The SIM card interface 395 may also be compatible with different types of SIM cards and with an external memory card. The electronic device 300 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 300 employs an eSIM, namely an embedded SIM card, which can be embedded in the electronic device 300 and cannot be separated from it.
The present application provides a computer-readable storage medium, which stores instructions that, when executed on a terminal device, cause the terminal device to perform the functions/steps of the first electronic device or the second electronic device as in the above method embodiments.
Embodiments of the present application further provide a computer program product containing instructions which, when run on a computer or at least one processor, cause the computer to perform the functions/steps of the first electronic device or the second electronic device in the above method embodiments.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates three possible relationships; for example, A and/or B may mean that A exists alone, that A and B exist simultaneously, or that B exists alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of the following" and similar expressions refer to any combination of the listed items, including any combination of single or plural items. For example, at least one of a, b, and c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be single or multiple.
Those of ordinary skill in the art will appreciate that the various units and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or a combination of the two. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, any function, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, or the portions thereof that substantially contribute over the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing an electronic device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present disclosure, and all the changes or substitutions should be covered by the protection scope of the present application. The protection scope of the present application shall be subject to the protection scope of the claims.
Claims (19)
1. An audio playing method is applied to a first electronic device, the first electronic device is connected with a wearable device worn by a user, the wearable device has a spatial audio function, and the first electronic device is used for sending audio data to the wearable device, and the method includes:
detecting a first relative position of the wearable device and the first electronic device, the first relative position including a first relative distance and a first relative direction between the wearable device and the first electronic device;
judging whether the first relative position changes or not;
and if the first relative position is judged to be changed, adjusting the spatial audio output by the wearable device according to the change of the first relative position.
2. The method of claim 1, wherein the first electronic device comprises a first sensor, and wherein detecting the first relative position of the wearable device and the first electronic device comprises:
detecting the first relative position by the first sensor.
3. The method of claim 1, wherein the adjusting the spatial audio output by the wearable device according to the change in the first relative position comprises:
adjusting the audio data according to the change of the first relative position to obtain first adjusted audio data;
and adjusting the spatial audio output by the wearable device by sending the first adjusted audio data to the wearable device.
4. The method of claim 1, wherein, when the first electronic device is connected to a second electronic device and the first electronic device transmits video data to the second electronic device, the method further comprises:
receiving a second relative position sent by the second electronic device, wherein the second relative position comprises a second relative distance and a second relative direction between the wearable device and the second electronic device;
judging whether the second relative position changes;
and if the second relative position is judged to be changed, adjusting the spatial audio output by the wearable device according to the change of the second relative position.
5. The method of claim 1, wherein, when the first electronic device is connected to a second electronic device and the first electronic device transmits video data and audio data to the second electronic device, the method further comprises:
receiving second adjusted audio data sent by the second electronic device, wherein the second adjusted audio data comprise audio data generated by the second electronic device according to the audio data and the change of a second relative position;
and adjusting the spatial audio output by the wearable device according to the second adjusted audio data.
6. The method of claim 1, wherein, when the first electronic device is connected to a second electronic device and the first electronic device transmits video data and audio data to the second electronic device, the method further comprises:
disconnecting from the wearable device, so that the second electronic device connects to the wearable device.
7. The method of claim 2, wherein the first sensor comprises a UWB sensor or a camera.
8. The method of any one of claims 1-7, wherein the wearable device comprises a headset.
9. An audio playing method is applied to a second electronic device, the second electronic device is connected with a first electronic device, the first electronic device is connected with a wearable device worn by a user, the wearable device has a spatial audio function, the second electronic device is used for receiving screen projection data sent by the first electronic device, and the method comprises the following steps:
detecting a second relative position between the wearable device and the second electronic device, the second relative position including a second relative distance and a second relative direction between the wearable device and the second electronic device;
and adjusting the spatial audio output by the wearable device according to the second relative position.
10. The method of claim 9, wherein when the screen projection data comprises video data, the adjusting spatial audio output by the wearable device according to the second relative position comprises:
controlling the first electronic device to adjust the spatial audio output by the wearable device by sending the second relative position to the first electronic device.
11. The method of claim 9, wherein when the screen projection data comprises video data and audio data, the adjusting spatial audio output by the wearable device according to the second relative position comprises:
judging whether the second relative position changes;
if the second relative position is judged to be changed, second adjusted audio data are generated according to the audio data and the change of the second relative position;
and sending the second adjusted audio data to the first electronic equipment to control the first electronic equipment to adjust the spatial audio output by the wearable equipment.
12. The method of claim 9, wherein when the first electronic device is disconnected from the wearable device, the second electronic device is connected to the wearable device, and the screen projection data comprises video data and audio data, the method further comprises:
transmitting the audio data to the wearable device.
13. The method of claim 12, wherein adjusting spatial audio output by the wearable device according to the second relative position comprises:
judging whether the second relative position changes;
and if the second relative position is judged to be changed, adjusting the spatial audio output by the wearable device according to the change of the second relative position.
14. The method of claim 12, wherein the detecting a second relative position between the wearable device and the second electronic device comprises:
detecting a second relative position between the wearable device and the second electronic device through a second sensor.
15. The method of claim 14, wherein the second sensor comprises a UWB sensor or a camera.
16. The method of any one of claims 9-15, wherein the wearable device comprises a headset.
17. An audio playback system, comprising:
a first electronic device for performing the method of any of claims 1-8 and/or a second electronic device for performing the method of any of claims 9-16;
a wearable device, wherein the wearable device has a spatial audio function.
18. An electronic device comprising a processor and a memory, wherein the memory is configured to store a computer program comprising program instructions that, when executed by the processor, cause the electronic device to perform the method of any of claims 1-16.
19. A computer-readable storage medium, characterized in that it stores a computer program comprising program instructions which, when executed by a computer, cause the computer to perform the method according to any one of claims 1-16.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210174200.3A CN114257920B (en) | 2022-02-25 | 2022-02-25 | Audio playing method and system and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114257920A (en) | 2022-03-29
CN114257920B (en) | 2022-07-29
Family
ID=80797003
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210174200.3A Active CN114257920B (en) | 2022-02-25 | 2022-02-25 | Audio playing method and system and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114257920B (en) |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000350299A (en) * | 2000-01-01 | 2000-12-15 | Sony Corp | Sound signal reproducing device |
CN103004238A (en) * | 2010-06-29 | 2013-03-27 | 阿尔卡特朗讯 | Facilitating communications using a portable communication device and directed sound output |
CN105684012A (en) * | 2013-10-25 | 2016-06-15 | 诺基亚技术有限公司 | Providing contextual information |
CN110121695A (en) * | 2016-12-30 | 2019-08-13 | 诺基亚技术有限公司 | Device and associated method in field of virtual reality |
CN110559127A (en) * | 2019-08-27 | 2019-12-13 | 上海交通大学 | intelligent blind assisting system and method based on auditory sense and tactile sense guide |
CN112425187A (en) * | 2018-05-18 | 2021-02-26 | 诺基亚技术有限公司 | Method and apparatus for implementing head tracking headphones |
US11102578B1 (en) * | 2018-09-27 | 2021-08-24 | Apple Inc. | Audio system and method of augmenting spatial audio rendition |
CN113692750A (en) * | 2019-04-09 | 2021-11-23 | 脸谱科技有限责任公司 | Sound transfer function personalization using sound scene analysis and beamforming |
WO2021233079A1 (en) * | 2020-05-18 | 2021-11-25 | 荣耀终端有限公司 | Cross-device content projection method, and electronic device |
US20210397249A1 (en) * | 2020-06-19 | 2021-12-23 | Apple Inc. | Head motion prediction for spatial audio applications |
CN113890932A (en) * | 2020-07-02 | 2022-01-04 | 华为技术有限公司 | Audio control method and system and electronic equipment |
US20220004315A1 (en) * | 2018-11-14 | 2022-01-06 | Huawei Technologies Co., Ltd. | Multimedia Data Playing Method and Electronic Device |
US20220019403A1 (en) * | 2020-07-20 | 2022-01-20 | Apple Inc. | Systems, Methods, and Graphical User Interfaces for Selecting Audio Output Modes of Wearable Audio Output Devices |
-
2022
- 2022-02-25 CN CN202210174200.3A patent/CN114257920B/en active Active
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
JP2000350299A * | 2000-01-01 | 2000-12-15 | Sony Corp | Sound signal reproducing device
CN103004238A * | 2010-06-29 | 2013-03-27 | Alcatel-Lucent | Facilitating communications using a portable communication device and directed sound output
CN105684012A * | 2013-10-25 | 2016-06-15 | Nokia Technologies Oy | Providing contextual information
CN110121695A * | 2016-12-30 | 2019-08-13 | Nokia Technologies Oy | Device and associated method in the field of virtual reality
CN112425187A * | 2018-05-18 | 2021-02-26 | Nokia Technologies Oy | Method and apparatus for implementing head tracking headphones
US11102578B1 * | 2018-09-27 | 2021-08-24 | Apple Inc. | Audio system and method of augmenting spatial audio rendition
US20220004315A1 * | 2018-11-14 | 2022-01-06 | Huawei Technologies Co., Ltd. | Multimedia Data Playing Method and Electronic Device
CN113692750A * | 2019-04-09 | 2021-11-23 | Facebook Technologies, LLC | Sound transfer function personalization using sound scene analysis and beamforming
CN110559127A * | 2019-08-27 | 2019-12-13 | Shanghai Jiao Tong University | Intelligent blind assisting system and method based on auditory and tactile guidance
WO2021233079A1 * | 2020-05-18 | 2021-11-25 | Honor Device Co., Ltd. | Cross-device content projection method, and electronic device
US20210397249A1 * | 2020-06-19 | 2021-12-23 | Apple Inc. | Head motion prediction for spatial audio applications
CN113890932A * | 2020-07-02 | 2022-01-04 | Huawei Technologies Co., Ltd. | Audio control method and system and electronic equipment
US20220019403A1 * | 2020-07-20 | 2022-01-20 | Apple Inc. | Systems, Methods, and Graphical User Interfaces for Selecting Audio Output Modes of Wearable Audio Output Devices
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
EP4132013A1 * | 2021-08-06 | 2023-02-08 | Beijing Xiaomi Mobile Software Co., Ltd. | Audio signal processing method, electronic apparatus, and storage medium
US11950087B2 | 2021-08-06 | 2024-04-02 | Beijing Xiaomi Mobile Software Co., Ltd. | Audio signal processing method, electronic apparatus, and storage medium
CN116700659A * | 2022-09-02 | 2023-09-05 | Honor Device Co., Ltd. | Interface interaction method and electronic equipment
CN116700659B * | 2022-09-02 | 2024-03-08 | Honor Device Co., Ltd. | Interface interaction method and electronic equipment
Also Published As
Publication Number | Publication Date
---|---
CN114257920B | 2022-07-29
Similar Documents
Publication | Title
---|---
CN110798568B | Display control method of electronic equipment with folding screen and electronic equipment
CN113810601B | Terminal image processing method and device and terminal equipment
CN111182140B | Motor control method and device, computer readable medium and terminal equipment
CN114489533A | Screen projection method and device, electronic equipment and computer readable storage medium
CN114257920B | Audio playing method and system and electronic equipment
CN110557740A | Electronic equipment control method and electronic equipment
CN110012130A | A kind of control method and electronic equipment of the electronic equipment with Folding screen
CN111526407B | Screen content display method and device
CN114422340A | Log reporting method, electronic device and storage medium
CN114466107A | Sound effect control method and device, electronic equipment and computer readable storage medium
CN113225661A | Loudspeaker identification method and device and electronic equipment
CN114339429A | Audio and video playing control method, electronic equipment and storage medium
WO2022206825A1 | Method and system for adjusting volume, and electronic device
CN114500901A | Double-scene video recording method and device and electronic equipment
CN113518189B | Shooting method, shooting system, electronic equipment and storage medium
CN115514844A | Volume adjusting method, electronic equipment and system
CN114661258A | Adaptive display method, electronic device, and storage medium
CN109285563B | Voice data processing method and device in online translation process
WO2023030067A1 | Remote control method, remote control device and controlled device
CN113129916A | Audio acquisition method, system and related device
CN113596320B | Video shooting variable speed recording method, device and storage medium
CN113923351B | Method, device and storage medium for exiting multi-channel video shooting
CN115393676A | Gesture control optimization method and device, terminal and storage medium
CN115145517A | Screen projection method, electronic equipment and system
CN111432156A | Image processing method and device, computer readable medium and terminal equipment
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
2022-06-10 | TA01 | Transfer of patent application right | Effective date of registration: 2022-06-10. Address after: floors 2-14, building 3, yard 5, Honeysuckle Road, Haidian District, Beijing, 100080. Applicant after: Beijing Honor Device Co., Ltd. Address before: unit 3401, unit A, building 6, Shenye Zhongcheng, No. 8089, Hongli West Road, Donghai community, Xiangmihu street, Futian District, Shenzhen, Guangdong, 518040. Applicant before: Honor Device Co., Ltd.
| GR01 | Patent grant |