CN112765395B - Audio playing method, electronic device and storage medium - Google Patents


Info

Publication number
CN112765395B
CN112765395B (application CN202110090450.4A)
Authority
CN
China
Prior art keywords
audio
user
value
played
playing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110090450.4A
Other languages
Chinese (zh)
Other versions
CN112765395A (en)
Inventor
庄晓亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
MIGU Music Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Music Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, MIGU Music Co Ltd and MIGU Culture Technology Co Ltd
Priority to CN202110090450.4A
Publication of CN112765395A
Application granted
Publication of CN112765395B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60: Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/63: Querying
    • G06F16/635: Filtering based on additional data, e.g. user or group profiles
    • G06F16/636: Filtering based on additional data, e.g. user or group profiles, by using biological or physiological data
    • G06F16/637: Administration of user profiles, e.g. generation, initialization, adaptation or distribution
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00: Details of transducers, loudspeakers or microphones
    • H04R1/20: Arrangements for obtaining desired frequency or directional characteristics

Abstract

The embodiment of the invention relates to the technical field of intelligent music playing, and discloses an audio playing method, an electronic device and a storage medium. The audio playing method comprises the following steps: determining the positional relation between a user and a loudspeaker in the area where the user is located; determining the sound relation between the ambient sound of the area where the user is located and the audio currently played by the loudspeaker; determining a filter coefficient of the loudspeaker according to the positional relation and the sound relation; and filtering the audio according to the filter coefficient and focusing the filtered audio onto the user. In this way the user can listen to music without wearing headphones or carrying a playback device, which provides convenience for the user.

Description

Audio playing method, electronic device and storage medium
Technical Field
The embodiment of the invention relates to the technical field of intelligent music playing, in particular to an audio playing method, electronic equipment and a storage medium.
Background
In a gym exercise scenario, a user typically plays music with an audio playback device to relieve the boredom of repetitive, monotonous exercise and improve comfort during the workout. Prior-art ways of playing music include the following. Mode one: audio is played through a portable playback device (e.g., a cell phone or MP3 player) using over-ear or in-ear sports headphones. Mode two: music is played through the gym's broadcast system.
However, the inventors found at least the following problems in the related art. In the first mode, the user must wear headphones on their body, which hinders the user's movement and is inconvenient. In the second mode, the broadcast system usually plays unified background music and cannot adapt to each person's preferences.
Disclosure of Invention
The embodiment of the invention aims to provide an audio playing method, an electronic device and a storage medium, so that a user can listen to music without wearing headphones or carrying a playback device, which provides convenience for the user.
In order to solve the above technical problems, an embodiment of the present invention provides an audio playing method, including: determining the position relation between a user and a loudspeaker in an area where the user is located; determining the sound relation between the environment sound of the area where the user is located and the audio currently played by the loudspeaker; determining a filter coefficient of the loudspeaker according to the position relation and the sound relation; and filtering the audio according to the filter coefficient, and focusing the filtered audio to the user.
The embodiment of the invention also provides electronic equipment, which comprises: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the audio playback method described above.
The embodiment of the invention also provides a computer readable storage medium storing a computer program which when executed by a processor realizes the audio playing method.
According to the embodiment of the invention, the relation between the user and the loudspeaker is determined, where the loudspeaker is arranged in the area where the user is located and the relation includes the positional relation between the user and the loudspeaker and/or the sound relation between the ambient sound of that area and the audio currently played by the loudspeaker; the filter coefficient of the loudspeaker is determined according to the relation; and the audio currently played by the loudspeaker is filtered according to the filter coefficient and the filtered audio is focused onto the area where the user is located. The positional relation reflects the distance between the user and the loudspeaker, and the sound relation reflects how strongly the ambient sound interferes with the played audio the user actually hears; a filter coefficient determined from the positional relation and/or the sound relation therefore helps the filtered music adapt to the user's position relative to the loudspeaker and/or reduces the interference of the ambient sound with the played audio. Focusing the filtered audio onto the area where the user is located means the user can hear the filtered music as clearly as possible while users in other areas can hardly hear it. This prevents the music played by the loudspeaker of one area from disturbing users in other areas, makes it possible to play different music in different areas through the loudspeakers arranged in those areas, and adapts to the personal preferences of different users.
With this audio playing method, a user can listen to music without wearing headphones or carrying a playback device, which provides convenience for the user and, when the user is in a fitness area, improves the user's workout experience.
In addition, determining the filter coefficient of the speaker according to the relation includes: acquiring current motion data of the user and determining an adjustment coefficient according to the motion data; and determining the filter coefficient of the loudspeaker according to the adjustment coefficient and the relation. The motion data reflects the user's current motion state; determining the filter coefficient from the adjustment coefficient in addition to the positional relation and/or sound relation allows the filtered music to also adapt to the user's current motion state, improving the experience of listening to music while exercising.
In addition, the motion data comprises a motion duration and/or a motion speed, and the adjustment coefficient comprises a first adjustment coefficient and/or a second adjustment coefficient. Determining the adjustment coefficient according to the motion data comprises: determining the first adjustment coefficient from the duration difference between the user's current motion duration and a preset motion duration; and/or determining the second adjustment coefficient from the speed difference between the user's current motion speed and a preset motion speed. The duration difference reflects how far the user's actual exercise time deviates from the expected exercise time, and the speed difference reflects how far the actual speed deviates from the expected speed, so the first and second adjustment coefficients can be determined more reasonably, which further improves the reasonableness of the resulting filter coefficient.
In addition, when the audio currently played by the speaker reaches a preset progress, the method further comprises: acquiring the user's current heart rate value and recommended heart rate value; if the current heart rate value is greater than the recommended heart rate value, determining, based on a playlist, the target audio closest to the currently played audio and using it as the audio the loudspeaker plays after the current audio finishes; or obtaining the heart rate difference between the current and recommended heart rate values, determining, according to the recommended playing value of each audio after the currently played audio in the playlist, an audio matching that heart rate difference, and using the matched audio as the audio the loudspeaker plays next. The recommended playing value of the target audio is less than or equal to a preset playing threshold, and the recommended playing value characterizes the excitation degree of the audio. By combining the excitation degree of the audio with the user's current and recommended heart rate values, the audio to be played next can be determined more reasonably, improving how well the played audio matches the user's heart rate. For example, if the current heart rate value is greater than the recommended heart rate value, playing a next audio with a lower excitation degree increases the likelihood that the user's heart rate decreases while listening to it.
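The track-selection step above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation: the names (`Track`, `pick_next_track`), the fallback to the playlist's head, and the threshold handling are all hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Track:
    name: str
    recommended_play_value: float  # "excitation degree" of the track

def pick_next_track(current_hr: float, recommended_hr: float,
                    playlist: List[Track], play_threshold: float) -> Optional[Track]:
    """Choose the track to play after the current one finishes."""
    if not playlist:
        return None
    if current_hr <= recommended_hr:
        return playlist[0]  # heart rate is fine: keep the normal order
    # Heart rate too high: pick the nearest upcoming track whose excitation
    # degree is at or below the preset play threshold, to calm the user down.
    for track in playlist:
        if track.recommended_play_value <= play_threshold:
            return track
    return playlist[0]

playlist = [Track("fast", 0.9), Track("calm", 0.3), Track("mid", 0.6)]
print(pick_next_track(130, 120, playlist, 0.5).name)  # -> calm
```

When the current heart rate does not exceed the recommended value, the sketch simply keeps the playlist order; only the "calming" branch deviates from it.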
Drawings
One or more embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings.
Fig. 1 is a flowchart of an audio playing method according to a first embodiment of the present application;
Fig. 2 is a flowchart of the sub-steps of step 102 mentioned in the first embodiment of the present application;
Fig. 3 is a flowchart of an audio playing method according to a second embodiment of the present application;
Fig. 4 is a flowchart of the sub-steps of step 305 mentioned in the second embodiment of the present application;
Fig. 5 is a schematic structural diagram of an electronic device according to a third embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application clearer, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will understand that numerous technical details are set forth in the embodiments to aid understanding of the present application; the claimed application may nevertheless be practiced without these specific details and with various changes and modifications based on the following embodiments. The embodiments are divided for convenience of description only, should not be construed as limiting specific implementations of the present application, and may be combined with and refer to one another where no contradiction arises.
The inventor of the present application observed that, in a gym exercise scenario where audio is played through a portable playback device (such as a mobile phone or MP3 player) with over-ear or in-ear sports headphones, the user's movements are usually vigorous: if the headphones are not fixed firmly they interfere with the exercise, and if they are fixed firmly the contact points between headphones and user become very uncomfortable. Moreover, conventional sports headphones with built-in earplugs, bone-conduction drivers and Bluetooth modules transmit sound poorly during exercise and under strong external interference (a gym is a venue with strong sound-source interference), which degrades the user experience. As for listening to music through a gym's broadcast system, most gyms are open spaces with many users, and the music rhythms suited to different exercises differ, so broadcast music cannot adapt to each person's preference. Unified background music is generally used, so music matched to different sports training cannot be provided, nor can the exercise guidance courses that different sports require.
To solve the above problems in the prior art, the first embodiment of the present invention provides an audio playing method that lets a user listen to music without wearing headphones or carrying a playback device, providing convenience and improving the user's exercise experience. The application scenario of this embodiment may be any open space equipped with speakers in which a user needs to listen to music, for example a gym. The audio playing method in this embodiment is applied to an electronic device, which may be a music playback device or a cloud server. The implementation details below take a cloud server as the example electronic device; they are provided only for ease of understanding and are not necessary to implement this embodiment.
As shown in fig. 1, a flowchart of an audio playing method in this embodiment includes:
step 101: a relationship between the user and the speaker is determined.
Step 102: and determining the filter coefficient of the loudspeaker according to the relation.
Step 103: and filtering the audio currently played by the loudspeaker according to the filter coefficient, and focusing the filtered audio to the area where the user is located.
The speaker is disposed in the area where the user is located. That area may be one of a plurality of independent sub-areas into which the entire exercise floor of the gym is divided. For example, based on acoustic focusing technology, each piece of fitness equipment in the gym is placed in its own partition, forming a plurality of logically independent sub-areas, and each sub-area is provided with a speaker. The speaker may be mounted on the ceiling above the sub-area or on the ground, for example directly above the sub-area, and may also be placed elsewhere according to actual needs; this embodiment does not limit the placement. Likewise, the number of speakers in a sub-area may be set according to actual needs and is not limited in this embodiment.
In one example, a single sub-area is the area of a treadmill, in which all speakers are arranged based on focusing technology so that an exerciser standing on the treadmill has their ears at the sound focus of the speakers. That is, the exerciser on the treadmill can hear the sound played by the speakers of their own sub-area, while exercisers in other areas (such as the sub-area of a chest press machine) cannot. In such a gym, a loudspeaker with an acoustic focusing system is installed directly above each sub-area and focuses and projects that area's sound waves into the sub-area below.
In a specific implementation, the loudspeaker comprises a filtering module, wherein the filtering module is used for filtering sound output by the loudspeaker, and the filtering degree can be determined according to the relation between a user and the loudspeaker, the current motion state of the user and the like.
In one example, each sub-area is provided with an indoor sensing system, which may include information acquisition devices such as pressure sensing floors, instrument sensors, distance sensors, and the like. The indoor induction system can induce indoor attributes (such as temperature, humidity and noisy degree), and can also collect body builder position information, exercise sign monitoring data and the like.
In a specific implementation, after a user enters the gym, the cloud server can acquire the user's movement data, which includes user data and device data. The user can log in to the cloud server after entering the gym; for example, if the user wears a wearable device such as a wristband that is bound to the user and logs in through it, the cloud server can track the wristband's position and thereby obtain the user data collected by the wristband and the device data measured by the fitness equipment at the wristband's position.
The user data may be heart rate, body temperature, movement speed, movement duration and other data collected through the wristband. The device data may be data measured by the equipment the exerciser is using (identifiable as the equipment at the wristband's position), such as a treadmill's grade, exercise duration, speed and heart rate. The user data and the device data may contain duplicate measurements, such as heart rate and speed, collected by both.
Optionally, the bracelet may also be used to open a locker, record user behavior, historical fitness data, body metrics, fitness courses, body data collection, and the like.
After the user enters the gym, the cloud server can also determine the area where the user is located and acquire the sensing data of that area's indoor sensing system. By tracking the position of the wristband, the cloud server determines which sub-area of the gym the user is in, and then acquires sensing data from the indoor sensing system of that sub-area. The sensing data comprises environmental data and person data. The environmental data describes the environment of the user's area, such as the sound energy value of the ambient sound and the floor pressure. The person data describes the user as sensed by the indoor sensing system, such as the current maximum height of the user (e.g., obtained by measuring the exerciser's current height) and the height of the lowest position of the user's head (e.g., the height of the user's chin above the ground, obtainable by identifying the user's head and locating the chin).
In one example, the sound-absorbing sponge layer is wrapped on the gymnastic equipment, and the sound-absorbing cotton pad is paved on the floor, so that the reflection of focused and projected sound waves to the periphery is reduced, and the interference between adjacent subareas in the gymnasium is reduced.
Based on this gym structure, namely the partitioned arrangement of speakers, the installation of indoor sensing systems, and the collection of movement data and sensing data, music can be played for each exerciser through the audio playing method of this embodiment.
In step 101, the relationship between the user and the speaker includes: the positional relationship between the user (e.g., the exerciser) and the speaker and/or the acoustic relationship between the ambient sound of the area in which the user is located and the audio currently being played by the speaker.
In one example, the positional relationship between the user and the speaker may include a distance relationship such as a distance relationship between the position of the user's ear and the position of the speaker.
In another example, the positional relationship between the user and the speaker may include: the height relation between the height value of the user and the height value of the speaker is, for example, the absolute value of the height difference. The height h of the user and the height h1 of the speaker can be obtained through sensing by an indoor sensing system in the area where the user is located. The absolute value h2 of the height difference is the absolute value of h 1-h.
In one example, the user's altitude value may be calculated by the following formula:
h = (h_top − h_bottom) × 2/3 + h_bottom;
where h_top is the user's current maximum height, which can be understood as the height of the highest position of the user's head, and h_bottom is the height of the lowest position of the user's head.
In one example, the height value of the speaker may be determined by the following formula:
h1 = d · tan(α) + h0
where h1 is the height value of the loudspeaker, α is the angle between the loudspeaker and the horizontal plane, h0 is the mounting height of the loudspeaker, and d is the horizontal distance between the loudspeaker and the user. d can be obtained through the indoor sensing system of the area where the user is located.
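The speaker-height formula and the absolute height difference h2 = |h1 − h| used later can be sketched as follows; the function names and example values are illustrative assumptions.

```python
import math

def speaker_height_value(d: float, alpha_deg: float, h0: float) -> float:
    """h1 = d * tan(alpha) + h0, per the formula above (alpha in degrees)."""
    return d * math.tan(math.radians(alpha_deg)) + h0

def height_difference(h_user: float, h_speaker: float) -> float:
    """h2 = |h1 - h|, the absolute height difference."""
    return abs(h_speaker - h_user)

h1 = speaker_height_value(d=2.0, alpha_deg=45.0, h0=2.5)  # 2*tan(45 deg) + 2.5 = 4.5
print(round(h1, 2), round(height_difference(1.72, h1), 2))
```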
In one example, the sound relationship between the ambient sound of the area in which the user is located and the audio currently being played by the speaker includes a volume relationship, such as a volume difference relationship.
In another example, the sound relationship between the ambient sound of the area in which the user is located and the audio currently being played by the speaker includes: a sound energy value relationship between the sound energy value of the ambient sound and the sound energy value of the currently played audio, such as a difference relationship, a ratio relationship, etc. of the sound energy values. The sound energy value E1 of the environmental sound can be sensed by an indoor sensing system in the area where the user is located.
In one example, the sound energy value E0 of the currently played audio is calculated from the following quantities: a1 and a2 are preset coefficients, f is the mean frequency of the currently played audio, A_max is the maximum amplitude of the currently played audio, A_min is the minimum amplitude of the currently played audio, and T is the duration standard deviation determined from the currently played audio. a1 and a2 can be set by those skilled in the art according to actual needs; for example, a1 may be between 1 and 2 and a2 between 330 and 360. In this embodiment, a1 may be 1.293 and a2 may be 346.
In one example, the standard deviation of the time period in the above formula can be determined as follows:
first, each rhythm point of the currently played audio is acquired. The rhythm point is determined according to the energy value of the currently played audio, and the time point with the larger energy value can be determined as one rhythm point. For example, an energy value threshold is preset, an energy value corresponding to each time point of the currently played audio is calculated, and if the energy value corresponding to a certain time point is greater than the energy value threshold, the time point can be taken as a rhythm point.
And secondly, calculating the time length between two adjacent rhythm points according to each rhythm point to obtain a time length sequence. That is, the duration between the next and previous tempo points is calculated to obtain a sequence such as {2 seconds, 4 seconds, 1 second … … }.
Then, a time standard deviation of each time in the time sequence is determined. In a specific implementation, the time standard deviation can be calculated by a calculation formula of the standard deviation. The standard deviation reflects the discrete degree of the data, so that the degree of density of the rhythm points of the currently played audio can be reflected through the standard deviation of the duration, and the smaller the standard deviation of the duration is, the denser the data is, namely, the rhythm biased to regularity is, the larger the standard deviation of the duration is, the more sparse the data is, namely, the rhythm is irregular.
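The three steps above can be sketched as follows. The text does not say whether the population or sample standard deviation is intended, so the population form is assumed here, and the example energy series is made up.

```python
import statistics

def rhythm_points(energy_by_time, threshold):
    """Step 1: time points whose energy exceeds the preset threshold."""
    return [t for t, e in energy_by_time if e > threshold]

def duration_std(points):
    """Steps 2-3: gaps between adjacent rhythm points, then their std."""
    gaps = [b - a for a, b in zip(points, points[1:])]
    return statistics.pstdev(gaps) if len(gaps) > 1 else 0.0

energy = [(0, 9), (1, 2), (2, 8), (3, 1), (4, 7), (6, 9)]
pts = rhythm_points(energy, threshold=5)   # [0, 2, 4, 6]
print(duration_std(pts))                   # gaps [2, 2, 2] -> 0.0 (very regular)
```

A perfectly regular beat gives a duration standard deviation of zero, matching the text's reading that smaller values mean a more regular rhythm.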
In step 102, the filter coefficient of the loudspeaker is determined according to the relation, i.e., the sound relation and/or the positional relation. The filter coefficient is mainly used to filter the audio played by the loudspeaker in the area where the user is located: the higher the coefficient, the stronger the filtering effect, and the lower the coefficient, the weaker the effect. Adjusting the filter coefficient of the loudspeaker in the user's area ensures the best sound-focusing effect for that area after the loudspeaker focuses the sound, which helps achieve the effect that the sound follows the user, i.e., the exerciser, wherever they are.
In one example, the cloud server may determine the filter coefficient of the speaker according to the sound relation. Specifically, a correspondence between sound relations and filter coefficients can be pre-stored in the cloud server, and the filter coefficient of the loudspeaker in the user's area is determined from this pre-stored correspondence. The correspondence can be set by those skilled in the art according to actual needs or obtained through big-data analysis. For example, if the sound relation is the difference between the volume of the ambient sound in the user's area and the volume of the audio currently played by the speaker, then the larger that volume difference, the larger the filter coefficient. Intuitively, the larger the difference, the more the ambient sound exceeds the played audio, so the volume of the played audio must be raised to offset the interference of the ambient sound and improve the user's experience of listening to the played audio.
In one example, the cloud server may determine the filter coefficient of the speaker according to the positional relation. Specifically, a correspondence between positional relations and filter coefficients can be pre-stored in the cloud server, and the filter coefficient of the loudspeaker in the user's area is determined from this pre-stored correspondence. The correspondence can be set by those skilled in the art according to actual needs or obtained through big-data analysis.
In one example, the cloud server may determine the filter coefficient of the speaker according to both the positional relation and the sound relation. For example, the positional relation is the absolute height difference h2 described above, and the sound relation is the energy ratio E2 between the sound energy value E1 of the ambient sound and the sound energy value E0 of the currently played audio. If E2 is less than 1, the filter coefficient may be E2^h2; if E2 is not less than 1, i.e., E2 ≥ 1, the filter coefficient is 0.
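Reading the rule above as the energy ratio E2 raised to the power of the height difference h2, it can be sketched as follows; the function name, parameter names and example values are assumptions.

```python
def filter_coefficient(e_ambient: float, e_audio: float,
                       h_user: float, h_speaker: float) -> float:
    """E2 = E1 / E0; coefficient is E2**h2 when E2 < 1, else 0."""
    e2 = e_ambient / e_audio        # energy ratio E2
    h2 = abs(h_speaker - h_user)    # absolute height difference
    return e2 ** h2 if e2 < 1.0 else 0.0

print(filter_coefficient(50.0, 100.0, 1.72, 4.5))   # 0.5**2.78, roughly 0.146
print(filter_coefficient(120.0, 100.0, 1.72, 4.5))  # ambient dominates -> 0.0
```

Since E2 < 1 means the played audio is more energetic than the ambient sound, the coefficient decays with the height difference; once the ambient sound dominates (E2 ≥ 1), the coefficient collapses to zero.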
In one example, step 102 may also be implemented by the sub-steps of, referring to fig. 2, including:
step 201: and acquiring current motion data of the user, and determining an adjustment coefficient according to the motion data.
Step 202: and determining the filter coefficient of the loudspeaker according to the adjustment coefficient and the relation.
Wherein the motion data in step 201 includes a motion duration and/or a motion speed, the adjustment coefficient includes a first adjustment coefficient and/or a second adjustment coefficient, and determining the adjustment coefficient according to the motion data includes: determining a first adjustment coefficient corresponding to the time length difference value according to the time length difference value between the current time length of the user and the preset time length of the user; and/or determining a second adjustment coefficient corresponding to the speed difference according to the speed difference between the current movement speed of the user and the preset movement speed.
In one example, the motion data includes a motion duration and the adjustment coefficient includes a first adjustment coefficient, which may be determined as follows: determine the duration difference between the user's current motion duration and a preset motion duration, and determine the first adjustment coefficient corresponding to that difference according to a first preset correspondence between duration differences and first adjustment coefficients. The preset motion duration may be a desired motion duration: the user can set in advance how long he or she intends to exercise, so the difference between the current motion duration and the preset motion duration reflects how far the actual exercise time deviates from the intended one. The first correspondence may be set by those skilled in the art according to actual needs, or may be obtained through big data analysis. For example, if the duration difference is the current motion duration minus the preset motion duration, then the larger the duration difference, the smaller the first adjustment coefficient. That is, when the current motion duration exceeds the preset motion duration, the user has already reached the desired exercise time and the duration difference is positive; the longer the user keeps exercising (the larger the positive difference), the smaller the first adjustment coefficient, suggesting that the user should ease off. When the current motion duration is below the preset motion duration, the user has not yet reached the desired exercise time and the duration difference is negative; as the current motion duration increases (the negative difference moves toward zero), the first adjustment coefficient still decreases.
In one example, the motion data includes a motion speed and the adjustment coefficient includes a second adjustment coefficient, which may be determined as follows: determine the speed difference between the user's current motion speed and a preset motion speed, and determine the second adjustment coefficient corresponding to that difference according to a second preset correspondence between speed differences and second adjustment coefficients. The preset motion speed may be a desired motion speed: the user can set in advance how fast he or she intends to move, so the difference between the current motion speed and the preset motion speed reflects how far the actual speed deviates from the intended one. The second correspondence may be set by those skilled in the art according to actual needs, or may be obtained through big data analysis. When the user's current motion speed exceeds the preset motion speed, the second adjustment coefficient decreases as the current motion speed increases.
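The text only fixes the monotone trend of the adjustment coefficients (a larger difference gives a smaller coefficient); the concrete mapping is left to the implementer or to big data analysis. One possible sketch, using a sigmoid-style mapping into (0, 1) as an assumed choice (the function name and the steepness parameter k are illustrative):

```python
import math

def adjustment_coefficient(current, preset, k=0.1):
    """Monotone-decreasing mapping from (current - preset) to a
    coefficient in (0, 1); the sigmoid form is an assumption, only
    the decreasing trend comes from the text."""
    diff = current - preset
    return 1.0 / (1.0 + math.exp(k * diff))
```

The same function can serve both for the duration difference (t1) and for the speed difference (t2), with a different k for each if desired.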
In one example, if the energy ratio is less than 1, the cloud server may determine the filter coefficients of the speaker based on the first adjustment coefficient, the energy ratio, and the absolute value of the height difference. For example, it is determined by the following formula:
β = t1 * E2^h2
where β is the filter coefficient, t1 is the first adjustment coefficient, E2 is the energy ratio, and h2 is the absolute value of the height difference.
In one example, if the energy ratio is less than 1, the cloud server may determine a filter coefficient of the speaker based on the second adjustment coefficient, the energy ratio, and an absolute value of the height difference. For example, it is determined by the following formula:
β = t2 * E2^h2
where β is the filter coefficient, t2 is the second adjustment coefficient, E2 is the energy ratio, and h2 is the absolute value of the height difference.
In one example, if the energy ratio is less than 1, the cloud server may determine the filter coefficients of the speaker based on the first adjustment coefficient, the second adjustment coefficient, the energy ratio, and the absolute value of the height difference. For example, it is determined by the following formula:
β = t1 * t2 * E2^h2
in one example, if the energy ratio is greater than or equal to 1, the cloud server may determine that the filter coefficient of the speaker is 0.
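The three example formulas and the E2 ≥ 1 cutoff can be folded into one sketch; defaulting an absent adjustment coefficient to 1 is an assumption made here to cover the single-coefficient cases:

```python
def speaker_filter_coefficient(e2, h2, t1=1.0, t2=1.0):
    """beta = t1 * t2 * E2**h2 when E2 < 1, otherwise 0; leaving
    t1 or t2 at their default of 1 reduces this to the
    one-coefficient examples."""
    return t1 * t2 * (e2 ** h2) if e2 < 1 else 0.0
```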
It should be noted that the above examples describe how to determine the filter coefficient of the speaker in the area, i.e., sub-region, where one user is located; in a specific implementation, the cloud server may determine the filter coefficients of the speakers in the multiple sub-regions where multiple users are respectively located, i.e., each speaker has its own filter coefficient.
In a specific implementation, after determining the filter coefficient of a speaker, the cloud server may send the coefficient to that speaker, so that the speaker can filter the currently played audio based on it and focus the filtered audio on the area where the user is located. This helps ensure that the loudest point of the audio is at the user's ear and moves with the user.
In one example, the cloud server may identify the exercise equipment placed in the area where the user is located, determine the exercise type from that equipment, automatically select audio with a suitable rhythm from the playlist in the user's personal database based on the user's exercise sign monitoring data, and play the audio through the speakers with their respective filter coefficients, so that the user below a speaker equipped with a sound focusing system can hear it.
In addition, the user may connect to the server of the music cloud system through a smartphone App and, by interacting with the music cloud system through the App, customize a personal fitness playlist, a music playing mode and a music switching mode. The playing mode lets the user choose the duration of the music, and the switching mode lets the user skip songs either by actively pressing a button on the wristband or via a customized song-switching gesture recognized by the wristband's built-in acceleration sensor.
In one example, an alarm alert may be issued based on the collected motion data of the user to avoid injury, for example a voice prompt: "You have been exercising continuously for more than 40 minutes; to avoid injury, it is recommended that you rest for 5 minutes and do a leg stretch."
The above examples in this embodiment are provided for ease of understanding and do not limit the technical solution of the present invention.
In this embodiment, the positional relationship between the user and the speaker reflects their distance, and the sound relationship between the ambient sound in the user's area and the currently played audio reflects how much the ambient sound interferes with the audio the user actually hears. Filter coefficients determined from the positional relationship and/or the sound relationship therefore help the filtered music adapt to the user's position relative to the speaker and/or reduce the interference of ambient sound with the played audio. Focusing the filtered audio on the area where the user is located means the user hears the filtered music as clearly as possible while users in other areas can hardly hear it, which prevents music played by the speaker in one area from disturbing users elsewhere, makes it possible to play different music for users in different areas through the speakers arranged there, and adapts to each user's personal preferences. With this audio playing method, the user can listen to played music without wearing earphones and without carrying a playback device, which is convenient and, when the user is in a fitness area, improves the workout experience.
A second embodiment of the present invention relates to an audio playing method. The implementation details of this embodiment are described below; they are provided only to aid understanding and are not required to implement this embodiment.
As shown in fig. 3, a flowchart of the audio playing method in this embodiment includes:
step 301: a relationship between the user and the speaker is determined.
Step 302: and determining the filter coefficient of the loudspeaker according to the relation.
Step 303: and filtering the audio currently played by the loudspeaker according to the filter coefficient, and focusing the filtered audio to the area where the user is located.
Steps 301 to 303 are substantially the same as steps 101 to 103 in the first embodiment, and are not repeated here.
Step 304: and when the audio currently played by the loudspeaker is played to a preset progress, acquiring the current heart rate value and the recommended heart rate value of the user.
Step 305: and determining the audio played by the loudspeaker after the audio played currently is played completely based on the list according to the current heart rate value and the recommended heart rate value.
The preset progress in step 304 may be set according to actual needs, for example, to 90%.
In one example, the cloud server may obtain the user's current heart rate value through a wristband worn by the user, or through fitness equipment used by the user. If current heart rate values measured by both the wristband and the fitness equipment are obtained and the two values differ, the larger one may be selected as the user's current heart rate value.
In one example, the recommended heart rate value of the user may be obtained by calculating the following formula:
recommended heart rate value = heart rate at registration + (B1 - B2 × age - heart rate at login) × [B3 + exercise age / (age - B4)]
Wherein the heart rate at registration can be understood as the heart rate value acquired when the user registers as a member of the gym; the heart rate at login can be understood as the heart rate value when the user logs into the cloud server after entering the gym; and the exercise age may be the value selected by the user after logging into the cloud server. B1, B2, B3 and B4 are preset coefficients: B1 may range from 190 to 250, B2 from 0 to 1, B3 from 0 to 1, and B4 from 5 to 15. In one example, the coefficients may take the values B1 = 220, B2 = 2/3, B3 = 0.6, B4 = 10.
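With the example coefficient values, the formula can be evaluated as follows (names are illustrative; heart rates in beats per minute):

```python
def recommended_heart_rate(hr_registration, hr_login, age, exercise_age,
                           b1=220, b2=2 / 3, b3=0.6, b4=10):
    """Recommended heart rate value = heart rate at registration
    + (B1 - B2*age - heart rate at login) * [B3 + exercise age/(age - B4)],
    with the example values B1=220, B2=2/3, B3=0.6, B4=10 as defaults."""
    return hr_registration + (b1 - b2 * age - hr_login) * (
        b3 + exercise_age / (age - b4))
```

For instance, a 30-year-old user with 5 years of exercise age, a registration heart rate of 70 and a login heart rate of 75 gets 70 + (220 - 20 - 75) × (0.6 + 0.25) = 176.25.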
In one example, the list in step 305 may be a playlist corresponding to the user, e.g., from a personal database created by the music cloud system that includes playlists, music style preferences, and audio playback preferences. If the user has preset a playlist, that playlist is read. If not, the user's age, occupation and similar data (filled in when the user registered through the wristband) are obtained, and n songs favored by a similar crowd are determined and used as the playlist corresponding to the user.
In one example, when the cloud server determines that the currently played song (i.e., audio) has reached 90% (the preset progress), it may decide, based on the user's current heart rate value and recommended heart rate value, whether the song following the current one in the user's playlist is kept or deleted. If it is kept, that next song is the one the speaker will play after the current song finishes; if it is deleted, the server goes on to decide whether the song after it is kept or deleted. In other words, if a kept song exists among the subsequent songs in the playlist, it becomes the next song to play; if no song in the playlist is kept, a song is randomly selected from songs played by users similar to this user and the keep-or-delete decision is applied to it: if kept, it is added to the playlist as the next song to play, and if not, another song is randomly selected until a next song is found.
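The keep-or-delete walk through the playlist can be sketched as follows; the keep decision itself (based on heart rate values and the recommended play value) is passed in as a callback, and the fallback to songs of similar users is left out:

```python
def next_song(playlist, current_index, keep):
    """Returns the first song after the current one that the keep
    decision retains, or None when the playlist is exhausted (the
    text then falls back to songs played by similar users)."""
    for song in playlist[current_index + 1:]:
        if keep(song):
            return song
    return None
```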
In one example, the audio the speaker plays after the currently played audio finishes may be determined as follows: if the current heart rate value is greater than the recommended heart rate value, determine the target audio closest to the currently played audio based on the list, and use it as the audio the speaker plays next; the recommended play value of the target audio is less than or equal to a preset play threshold, where the recommended play value characterizes the excitement level. The closest target audio may be found by checking, in list order, whether each audio is a target audio; once the closest one is found, the remaining audios in the list need not be checked, which speeds up target selection.
In one example, the target audio closest to the currently played audio determined based on the list may be the closest target audio among the audios listed after the currently played audio, which helps avoid replaying audio that has already been played and improves the user's listening experience.
In another example, the target audio closest to the currently playing audio determined based on the list may be the target audio closest to the currently playing audio in each audio except the currently playing audio in the list, that is, the target audio may be located before or after the currently playing audio. The method is beneficial to expanding the range of searching the target audio and improving the possibility of searching the target audio.
In one example, if there is additional audio between the target audio and the currently playing audio, the audio between the target audio and the currently playing audio is deleted.
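Deleting the audios between the currently played audio and the target audio can be sketched as follows (names illustrative; assumes both audios appear in the list with the target after the current one):

```python
def trim_between(playlist, current, target):
    """Removes the audios strictly between the current and the target
    audio, keeping both endpoints."""
    i, j = playlist.index(current), playlist.index(target)
    return playlist[:i + 1] + playlist[j:]
```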
In one example, the target audio may be found as shown in fig. 4, which comprises the following steps:
step 401: judging whether the current heart rate value is larger than the recommended heart rate value or not; if so, step 402 is performed, otherwise step 403 is performed.
Step 402: and acquiring a recommended playing value of the next audio of the audio currently played in the list.
The recommended playing value is used for representing the excitation degree of the next audio, and the higher the excitation degree is, the larger the recommended playing value is.
In one example, the recommended play value may be determined as follows: the cloud server obtains each rhythm point of the audio in the list whose recommended play value is to be determined, computes the duration between every two adjacent rhythm points to obtain a duration sequence, determines the standard deviation of the durations in that sequence, and determines the recommended play value from the duration standard deviation and a tag value characterizing the excitement level. The calculation of an audio's duration standard deviation was described in the first embodiment and is not repeated here. The following mainly describes how the recommended play value of the next audio is determined from the duration standard deviation and the tag value characterizing its excitement level:
In one example, the recommended play value of the next audio may be the product of the duration standard deviation and the tag value characterizing the next audio's excitement level. The tag value is obtained according to a preset correspondence between tags and excitement levels; the smaller the value, the more exciting the audio. The tag value may be obtained through big data analysis or set manually, which is not specifically limited in this embodiment.
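The product of the duration standard deviation and the tag value can be sketched as follows; whether the population or the sample standard deviation is intended is not stated, so using `pstdev` here is an assumption:

```python
from statistics import pstdev

def recommended_play_value(beat_times, tag_value):
    """Standard deviation of the gaps between adjacent rhythm points,
    multiplied by the excitement tag value."""
    gaps = [b - a for a, b in zip(beat_times, beat_times[1:])]
    return pstdev(gaps) * tag_value
```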
Step 403: the next audio is determined to be the target audio.
That is, if the current heart rate value is greater than the recommended heart rate value and the recommended play value is less than or equal to the preset play threshold, the audio is retained in the playlist.
Step 404: judging whether the recommended playing value is smaller than or equal to a preset playing threshold value; if so, step 403 is performed, otherwise step 405 is performed.
The preset playing threshold may be set according to actual needs, which is not specifically limited in this embodiment.
Step 405: the target audio continues to be found in the list.
That is, if the current heart rate value is greater than the recommended heart rate value and the recommended play value is greater than the preset play threshold, the next audio may be deleted, and the server continues to decide whether the audios after it in the playlist are kept or deleted; a kept audio becomes the target audio. If no target audio exists in the list, a song may be randomly selected from the audios played by users similar to this user and the keep decision applied to it: if kept, it is added to the list as the target audio, and if not, another song is randomly selected until the target audio is found.
In another example, the audio the speaker plays after the currently played audio finishes may be determined as follows: obtain the heart rate difference between the current heart rate value and the recommended heart rate value, determine the audio matching that difference according to the recommended play values of all audios in the list except the currently played one, and use the matching audio as the audio the speaker plays next, where the recommended play value characterizes the excitement level. The calculation of the recommended play value was described above and is not repeated here. This approach considers all audios in the list except the currently played one, so the audio best matching the heart rate difference is obtained and the played audio adapts well to the user's current heart rate.
For example, a preset relationship between heart rate differences and recommended play values may be configured in advance; the target recommended play value corresponding to the user's heart rate difference is selected according to that relationship, the recommended play value of each audio in the list except the currently played one is obtained, and the finally matched audio is the one whose recommended play value equals or is closest to the target. If several audios match, one of them is randomly selected as the audio the speaker plays after the current audio finishes. The preset relationship between heart rate differences and recommended play values may be set according to actual needs or obtained through big data analysis, which is not specifically limited in this embodiment.
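Selecting the audio whose recommended play value is closest to the target value can be sketched as follows (names illustrative; ties are broken by list order here, whereas the text picks randomly among equally good matches):

```python
def best_match(candidates, target_value):
    """candidates: (audio, recommended_play_value) pairs; returns the
    audio whose recommended play value is closest to the target."""
    return min(candidates, key=lambda c: abs(c[1] - target_value))[0]
```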
In this way, the currently played audio can be kept matched to the user's current heart rate value: songs with a low excitement level, such as soothing songs, are played when the current heart rate is high, and songs with a high excitement level, such as rock songs, are played when it is low, which helps improve the user's workout experience.
In this embodiment, the user can do without a portable music playback device: the audio played by the speaker is switched according to the exercise type of each sub-region and the user's current heart rate while exercising, and the played audio is filtered with the filter coefficients so that its volume and direction are adjusted without requiring the user's attention, greatly improving the workout experience.
In one example, an indoor sensing system may report sensing data to the cloud and to the gym's back end, and the wristband worn by the user reports the collected user data to the cloud (i.e., the cloud server) and the gym back end. The gym back end plays corresponding background music for the gym area through the music cloud system. The cloud can then select music with a suitable rhythm according to the exercise type, exercise intensity, and exercise sign monitoring data sent by the user's wristband: the music starts when the user begins exercising and gradually stops when the exercise stops, freeing the user from a playback device during exercise and improving the workout experience. The fitness playlist can be edited through the smartphone App in everyday use; suitable songs from the playlist are played automatically during workouts, and the system then automatically switches to random music matching the user's preferences and movement rhythm. The cloud can also provide voice training courses according to the user's exercise plan, so the user can work out along with them.
In one example, an exercise-data acquisition system with a built-in wireless communication module is arranged on the exercise equipment, so the user exercise data collected by the equipment can be uploaded to the cloud's song-selection system to improve song-selection accuracy, and an exercise profile can be built in the personal database to record the user's exercise data. Optionally, background music for a "professional training course" can be produced from the user's exercise data to provide professional guidance during individual workouts.
The above examples in this embodiment are provided for ease of understanding and do not limit the technical solution of the present invention.
In this embodiment, by combining the excitement level of the next audio with the user's current heart rate value and recommended heart rate value, it can be decided more reasonably whether the next audio should be the one the speaker plays after the current audio finishes. For example, when the current heart rate value is greater than the recommended heart rate value, i.e., the heart rate is high, and the next audio has a low excitement level, that audio can be played next to help bring the heart rate down, increasing the likelihood that the user's heart rate decreases after listening and improving the workout experience.
The division of the above methods into steps is for clarity of description; when implemented, steps may be combined into one or split into several, and all such variants fall within the protection scope of this patent as long as they contain the same logical relationship. Adding insignificant modifications to, or introducing insignificant designs into, the algorithm or flow without altering its core design also falls within the protection scope of this patent.
A third embodiment of the invention relates to an electronic device, as shown in fig. 5, comprising at least one processor 501; and a memory 502 communicatively coupled to the at least one processor 501; the memory 502 stores instructions executable by the at least one processor 501, and the instructions are executed by the at least one processor 501, so that the at least one processor 501 can execute the audio playing method in the first embodiment or the second embodiment.
Where the memory 502 and the processor 501 are connected by a bus, the bus may comprise any number of interconnected buses and bridges, the buses connecting the various circuits of the one or more processors 501 and the memory 502. The bus may also connect various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art, and therefore, will not be described any further herein. The bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or may be a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. The data processed by the processor 501 is transmitted over a wireless medium via an antenna, which further receives the data and transmits the data to the processor 501.
The processor 501 is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And memory 502 may be used to store data used by processor 501 in performing operations.
A fourth embodiment of the present application relates to a computer-readable storage medium storing a computer program. The computer program implements the above-described method embodiments when executed by a processor.
That is, it will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by a program stored in a storage medium, the program including several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods of the embodiments of the application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples of carrying out the application and that various changes in form and details may be made therein without departing from the spirit and scope of the application.

Claims (11)

1. An audio playing method, comprising:
determining a relationship between a user and a speaker; wherein the speaker is disposed in an area where the user is located, and the relationship includes: the position relation between the user and the loudspeaker and the sound relation between the environment sound of the area where the user is located and the audio currently played by the loudspeaker;
determining filter coefficients of the loudspeaker according to the relation;
filtering the audio currently played by the loudspeaker according to the filter coefficient, and focusing the filtered audio to the area where the user is located;
wherein the loudspeaker is a loudspeaker with an acoustic focusing system.
2. The audio playing method according to claim 1, wherein the determining the filter coefficients of the speaker according to the relation includes:
acquiring current motion data of the user, and determining an adjustment coefficient according to the motion data;
and determining the filter coefficient of the loudspeaker according to the adjustment coefficient and the relation.
3. The audio playing method according to claim 2, wherein the motion data comprises a motion duration and/or a motion speed, the adjustment coefficients comprise a first adjustment coefficient and/or a second adjustment coefficient, and the determining the adjustment coefficients according to the motion data comprises:
Determining a first adjustment coefficient corresponding to a duration difference value according to the duration difference value between the current motion duration of the user and a preset motion duration; and/or,
and determining a second adjustment coefficient corresponding to the speed difference value according to the speed difference value between the current movement speed of the user and the preset movement speed.
4. The audio playing method according to claim 3, wherein the positional relationship includes a height relationship between a height value of the user and a height value of the speaker; and/or,
the sound relationship includes a sound energy value relationship between a sound energy value of the ambient sound and a sound energy value of the currently played audio.
5. The audio playing method according to claim 4, wherein the positional relationship includes the height relationship, the height relationship is an absolute value of a height difference, the sound relationship includes the sound energy value relationship, the sound energy value relationship is an energy ratio, the motion data includes a motion duration and a motion speed, the adjustment coefficient includes a first adjustment coefficient and a second adjustment coefficient, and the determining the filter coefficient of the speaker according to the adjustment coefficient and the relationship includes:
If the energy ratio is less than 1, the filter coefficient is determined by the following formula:
β = t1 * t2 * E2^h2
where β is the filter coefficient, t1 is the first adjustment coefficient, t2 is the second adjustment coefficient, E2 is the energy ratio, and h2 is the absolute value of the height difference.
6. The audio playing method according to claim 1, wherein when the audio currently played by the speaker is played to a preset progress, the method further comprises:
acquiring a current heart rate value and a recommended heart rate value of the user;
if the current heart rate value is larger than the recommended heart rate value, determining a target audio closest to the current played audio based on a list, and taking the target audio as the audio played by the loudspeaker after the current played audio is played; or, obtaining a heart rate difference value between the current heart rate value and the recommended heart rate value, determining an audio matched with the heart rate difference value according to recommended playing values of all the audios except the currently played audio in a list, and taking the matched audio as the audio played by the loudspeaker after the currently played audio is played;
The recommended playing value of the target audio is smaller than or equal to a preset playing threshold, and the recommended playing value is used for representing the excitation degree.
7. The audio playing method according to claim 6, wherein the recommended playing value is obtained by:
acquiring each rhythm point of the audio in the list whose recommended playing value is to be determined;
calculating, according to the rhythm points, the duration between every two adjacent rhythm points to obtain a duration sequence;
determining the duration standard deviation of the durations in the duration sequence;
and determining the recommended playing value according to the duration standard deviation and a label value used for representing the excitation degree.
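The steps of claim 7 can be sketched as below. Steps 2 and 3 (adjacent-beat durations and their standard deviation) follow the claim directly; the claim does not specify how the deviation and the label value are combined, so the weighted form in the last line is an assumption, as are the function and parameter names.

```python
import statistics

def recommended_playing_value(beat_times, label_value, weight=1.0):
    # Durations between every two adjacent rhythm points (claim 7, step 2).
    durations = [b - a for a, b in zip(beat_times, beat_times[1:])]
    # Standard deviation of those durations (claim 7, step 3).
    sigma = statistics.pstdev(durations)
    # Combine with the excitation-degree label (claim 7, step 4). This rule is
    # not given in the claim: here a steadier rhythm (small sigma) keeps more
    # of the label's excitation score.
    return label_value / (1.0 + weight * sigma)
```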
8. The audio playing method according to claim 1, wherein the relationship includes the positional relationship including a height relationship between a height value of the user and a height value of the speaker, the height value of the speaker being determined by the following formula:
h1=d*tanα+h0
wherein h1 is the height value of the speaker, α is the angle between the horn of the speaker and the horizontal plane, h0 is the height of the speaker body, and d is the horizontal distance between the speaker and the user.
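Claim 8's height formula h1 = d·tan α + h0 is straightforward to evaluate; the sketch below assumes α is supplied in degrees (the claim does not state the unit), and the function name is illustrative.

```python
import math

def speaker_height(d, alpha_deg, h0):
    # h1 = d * tan(alpha) + h0 (claim 8); alpha is taken here in degrees.
    return d * math.tan(math.radians(alpha_deg)) + h0
```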
9. The audio playing method according to claim 4, wherein the relationship comprises the sound relationship, the sound relationship comprising a sound energy value relationship between a sound energy value of the ambient sound and a sound energy value of the currently played audio, the sound energy value of the currently played audio being determined by the following formula:
wherein E0 is the sound energy value of the currently played audio, a1 and a2 are both preset coefficients, f is the average frequency of the currently played audio, A_max is the maximum amplitude value of the currently played audio, A_min is the minimum amplitude value of the currently played audio, and T is the duration standard deviation determined based on the currently played audio.
10. An electronic device, comprising:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the audio playing method of any one of claims 1 to 9.
11. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the audio playing method of any one of claims 1 to 9.
CN202110090450.4A 2021-01-22 2021-01-22 Audio playing method, electronic device and storage medium Active CN112765395B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110090450.4A CN112765395B (en) 2021-01-22 2021-01-22 Audio playing method, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN112765395A CN112765395A (en) 2021-05-07
CN112765395B true CN112765395B (en) 2023-09-19

Family

ID=75706723

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110090450.4A Active CN112765395B (en) 2021-01-22 2021-01-22 Audio playing method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN112765395B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113536027A (en) * 2021-07-27 2021-10-22 咪咕音乐有限公司 Music recommendation method, device, equipment and computer readable storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1402859A (en) * 1999-12-01 2003-03-12 西尔弗布鲁克研究有限公司 Audio player with code sensor
CN104571469A (en) * 2013-10-12 2015-04-29 华为技术有限公司 Method, device and terminal for outputting sound signals
CN104936125A (en) * 2015-06-18 2015-09-23 三星电子(中国)研发中心 Method and device for realizing surround sound
CN104978038A (en) * 2015-03-12 2015-10-14 齐鲁工业大学 Novel laser keyboard input type music playing system
CN105721973A (en) * 2016-01-26 2016-06-29 王泽玲 Bone conduction headset and audio processing method thereof
CN106687958A (en) * 2016-12-08 2017-05-17 深圳市汇顶科技股份有限公司 Audio playing device, system and method
CN107005764A (en) * 2014-11-21 2017-08-01 三星电子株式会社 Earphone with activity control output
CN108710486A (en) * 2018-05-28 2018-10-26 Oppo广东移动通信有限公司 Audio frequency playing method, device, earphone and computer readable storage medium
CN110049403A (en) * 2018-01-17 2019-07-23 北京小鸟听听科技有限公司 A kind of adaptive audio control device and method based on scene Recognition
CN110979178A (en) * 2019-12-16 2020-04-10 中国汽车工程研究院股份有限公司 Intelligent vehicle driver voice reminding device based on sound focusing
CN111630879A (en) * 2018-01-19 2020-09-04 诺基亚技术有限公司 Associated spatial audio playback

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9031939B2 (en) * 2007-10-03 2015-05-12 Peter Neal Nissen Media sequencing method to provide location-relevant entertainment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on key technologies of a hybrid music recommendation system in public environments; Chen Yaxi et al.; Application Research of Computers (《计算机应用研究》); 2012-11-30; Vol. 29, No. 11; pp. 4250-4253 *

Also Published As

Publication number Publication date
CN112765395A (en) 2021-05-07

Similar Documents

Publication Publication Date Title
US11729565B2 (en) Sound normalization and frequency remapping using haptic feedback
CN103680545B (en) Audio frequency broadcast system and its control method for playing back
US20060288846A1 (en) Music-based exercise motivation aid
US9779751B2 (en) Respiratory biofeedback devices, systems, and methods
JP5394532B2 (en) Localized audio network and associated digital accessories
RU2454259C2 (en) Personal training device using multidimensional spatial audio signal
CN106535044A (en) Intelligent sound equipment playing control method and music playing control system
CN109413537A (en) Audio signal playback method, device and earphone
US20030033600A1 (en) Monitoring of crowd response to performances
CN108429972B (en) Music playing method, device, terminal, earphone and readable storage medium
CN103886857B (en) A kind of noise control method and equipment
CN106648524A (en) Audio paying method and audio playing equipment
CN104460982A (en) Presenting audio based on biometrics parameters
TW201820315A (en) Improved audio headset device
CN1732713B (en) Audio reproduction apparatus, feedback system and method
EP1128358A1 (en) Method of generating an audio program on a portable device
US8358786B2 (en) Method and apparatus to measure hearing ability of user of mobile device
CN107272900A (en) A kind of wearable music player of autonomous type
CN109918039A (en) A kind of volume adjusting method and mobile terminal
CN112765395B (en) Audio playing method, electronic device and storage medium
CN106210266A (en) A kind of acoustic signal processing method and audio signal processor
JP4517401B2 (en) Music playback apparatus, music playback program, music playback method, music selection apparatus, music selection program, and music selection method
KR102290587B1 (en) Wearable audio device and operation method thereof
CN110267155A (en) A kind of control method and speaker of speaker
CN106446152A (en) Audio file recommendation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant