WO2017215649A1 - Procédé d'ajustement d'effet sonore et terminal d'utilisateur - Google Patents

Procédé d'ajustement d'effet sonore et terminal d'utilisateur (Sound effect adjustment method and user terminal)

Info

Publication number
WO2017215649A1
WO2017215649A1 (PCT/CN2017/088671, CN2017088671W)
Authority
WO
WIPO (PCT)
Prior art keywords
target
sound effect
effect parameter
identity information
user terminal
Prior art date
Application number
PCT/CN2017/088671
Other languages
English (en)
Chinese (zh)
Inventor
李亚军
涂广
甘高亭
杨海
Original Assignee
广东欧珀移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广东欧珀移动通信有限公司
Publication of WO2017215649A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/165Management of the audio stream, e.g. setting of volume, audio stream path
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/725Cordless telephones

Definitions

  • the present invention relates to the field of electronic technologies, and in particular, to a sound effect adjustment method and a user terminal.
  • user terminals such as mobile phones and tablet computers have become an indispensable part of people's lives. People not only use the user terminal for daily communication, but also for various forms of entertainment, such as playing games, browsing the Internet, and playing audio and video.
  • most user terminals have a built-in sound equalizer, and the sound effect can be adjusted by the sound equalizer. Once the adjustment is completed, the user terminal will play the audio with the sound effect parameter until the next adjustment.
  • it is difficult for a sound effect parameter to meet the needs of all users at the same time. For a particular group of people, especially elderly people or users with poor hearing, the requirements for sound effects are more stringent.
  • the embodiment of the invention discloses a sound effect adjustment method and a user terminal, which can automatically switch different sound effects for different users.
  • the first aspect of the embodiments of the present invention discloses a sound effect adjustment method, including: detecting whether a target application in the user terminal has a user login; when it is detected that the target application has a user login, acquiring target identity information of the user; acquiring, according to the target identity information, a target sound effect parameter corresponding to the target identity information; and when the user terminal receives an audio output instruction, loading the target sound effect parameter for audio playback.
  • a second aspect of the embodiment of the present invention discloses a user terminal, including:
  • a detecting unit configured to detect whether a target application in the user terminal has a user login
  • a first acquiring unit configured to acquire target identity information of the user when the detecting unit detects that the target application has a user login
  • a second acquiring unit configured to acquire, according to the target identity information, a target sound effect parameter corresponding to the target identity information
  • a loading unit configured to load the target sound effect parameter for audio playback when the user terminal receives an audio output instruction.
  • a third aspect of the embodiments of the present invention discloses a user terminal, including a processor and a memory, wherein the memory is used to store programs and data, and the processor is configured to invoke the program stored in the memory to perform the method described in the first aspect above.
  • a fourth aspect of an embodiment of the present invention discloses a computer readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method as described in the first aspect above.
  • a fifth aspect of an embodiment of the present invention discloses a computer program product comprising a non-transitory computer readable storage medium storing a computer program, the computer program being operable to cause a computer to perform the method described in the first aspect above.
  • in the embodiments of the present invention, when it is detected that the target application in the user terminal has a user login, the target identity information of the user may be acquired, and the target sound effect parameter matching the target identity information is obtained according to the target identity information; when the user terminal receives an audio output instruction, the target sound effect parameter is loaded for audio playback.
  • the corresponding sound effect parameter can be loaded by identifying the user's identity information, thereby automatically switching different sound effects for different users, improving the operation convenience, and effectively improving the user experience and the hearing effect.
  • FIG. 1 is a schematic flow chart of a sound effect adjustment method according to an embodiment of the present invention.
  • FIG. 2 is a schematic flow chart of another sound effect adjustment method disclosed in an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of a user terminal according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of another user terminal according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of another user terminal according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of another user terminal according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic structural diagram of another user terminal according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic structural diagram of still another user terminal according to an embodiment of the present invention.
  • a single sound effect parameter can hardly meet the needs of all users at the same time. For special groups of users, especially the elderly or users with impaired hearing, the requirements on sound effects are more stringent. To meet different users' requirements, different sound effect parameters have to be set manually and frequently, which makes the operation cumbersome and time-consuming.
  • the embodiments of the present invention disclose a sound effect adjustment method and a user terminal, which can load the corresponding sound effect parameter by recognizing the user's identity information, thereby automatically switching different sound effects for different users and effectively improving the user experience and the listening effect.
  • the details are described below separately.
  • FIG. 1 is a schematic flowchart of a sound effect adjustment method according to an embodiment of the present invention. As shown in FIG. 1, the sound effect adjustment method may include the following steps:
  • Step 101: Detect whether the target application in the user terminal has a user login; if yes, execute step 102; if no, execute step 105.
  • the user terminal may include a mobile phone, a tablet computer, a palmtop computer, a personal digital assistant (PDA), a mobile Internet device (MID), a multimedia player (such as an MP3 player or a CD player), a smart wearable device (such as a smart watch or a smart wristband), and the like.
  • the target application in the user terminal may be a built-in application of the user terminal, or may be a downloaded and installed third-party application, which is not limited in the embodiments of the present invention. For example, it may be detected whether the audio player in the user terminal has a user login. It is possible to detect in real time whether the target application in the user terminal has a user login, or to detect this at specific times. In addition, it is also possible to detect whether there is a user login to the user terminal itself, that is, the user terminal is logged in to before it is used.
  • the specific implementation manner of step 101, detecting whether the target application in the user terminal has a user login, may include: detecting whether a login operation triggered by the user is received on the login interface of the target application in the user terminal, where the login operation carries the login information of the user; if it is received, determining whether the login information input by the user matches the preset login information, and if it matches, determining that the target application in the user terminal has a user login.
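  • As a concrete illustration of this credential check, the following Kotlin sketch compares received login information against preset login information; the names LoginInfo, presetLogins, and hasUserLogin are assumptions made for this example and do not come from the disclosure.

```kotlin
// Illustrative sketch only: LoginInfo and presetLogins are assumed names,
// not part of the patent disclosure.
data class LoginInfo(val userName: String, val password: String)

class LoginDetector(private val presetLogins: Map<String, String>) {

    // Step 101 as described above: a user login is detected when a login
    // operation has been received and its login information matches the
    // preset login information.
    fun hasUserLogin(received: LoginInfo?): Boolean {
        if (received == null) return false               // no login operation was triggered
        val expectedPassword = presetLogins[received.userName] ?: return false
        return expectedPassword == received.password     // compare with preset login information
    }
}
```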
  • Step 102: When it is detected that the target application has a user login, the target identity information of the user may be acquired.
  • before logging in, the user can register on the target application platform in the user terminal, for example by using a user name (or login account) and a password.
  • the user name can be a nickname or email address input by the user.
  • the password may include, but is not limited to, at least one of a text string password, a gesture password, and a biometric information password, and the biometric information may include, but is not limited to, facial feature information, fingerprint information, iris information, retina information, and voiceprint information.
  • during registration, the user's age (which can be used to determine the age group the user belongs to), gender, preferences, and other information may also be entered.
  • when the user has successfully logged in, the target identity information of the user may be obtained, where the target identity information may include, but is not limited to, at least one of the user name, age, gender, and the like.
  • Step 103: Acquire, according to the target identity information, the target sound effect parameter matching the target identity information. The target sound effect parameter may include, but is not limited to, at least one of a volume value, a sound effect style (such as acoustic, rock, classical, pop, jazz, etc.), a scene mode (such as concert hall mode, room mode, headphone mode, KTV mode, etc.), stereo, and single/dual channel. The volume value can be adjusted through the volume control, and the sound effect style, scene mode, and other information can be adjusted through the sound equalizer.
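  • The following Kotlin data class is a minimal sketch of one way such a sound effect parameter could be represented in memory; the field and enum names are illustrative assumptions, not definitions from the disclosure.

```kotlin
// Illustrative representation of a sound effect parameter as enumerated above.
enum class SoundStyle { ACOUSTIC, ROCK, CLASSICAL, POP, JAZZ }
enum class SceneMode { CONCERT_HALL, ROOM, HEADPHONE, KTV }
enum class ChannelMode { MONO, STEREO }

data class SoundEffectParams(
    val volume: Int,              // volume value, e.g. 0..100, adjusted via the volume control
    val style: SoundStyle,        // sound effect style, adjusted via the sound equalizer
    val sceneMode: SceneMode,     // concert hall / room / headphone / KTV
    val channelMode: ChannelMode  // single or dual channel
)
```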
  • the target sound effect parameters corresponding to different target identity information may be different. For example, different sound effect parameters may be set for different users; when a user registers, the corresponding sound effect parameters can be set.
  • as an optional implementation, before step 101, the method described in FIG. 1 may further include the following step: presetting and storing a sound effect parameter list for different users, where the sound effect parameter list includes mapping relationships between the identity information of different users and sound effect parameters.
  • the specific implementation manner of step 103, acquiring the target sound effect parameter corresponding to the target identity information according to the target identity information, may then be: acquiring, according to the target identity information, the target sound effect parameter corresponding to the target identity information from the sound effect parameter list.
  • when a user registers, the corresponding sound effect parameter may be set for the user and stored in the sound effect parameter list, and the sound effect parameter list may include the identity information of all registered users.
  • the sound effect parameters corresponding to different identity information may be different.
  • the user can also change the preset sound effect parameters according to his own needs or preferences. At this time, the sound effect parameters corresponding to the user in the sound effect parameter list can be updated accordingly.
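  • A minimal sketch of such a sound effect parameter list is shown below, building on the SoundEffectParams sketch above; IdentityInfo and the method names are assumptions for illustration only.

```kotlin
// Illustrative identity information carried by a logged-in user.
data class IdentityInfo(val userName: String, val age: Int? = null, val gender: String? = null)

// The "sound effect parameter list": a mapping from a user's identity
// information to that user's preset sound effect parameter.
class SoundEffectParamList {
    private val mapping = mutableMapOf<String, SoundEffectParams>()

    // Called when a user registers, or later when the user changes the
    // preset parameter: the entry for this identity is (re)stored.
    fun put(identity: IdentityInfo, params: SoundEffectParams) {
        mapping[identity.userName] = params
    }

    // Step 103-style lookup: returns the parameter matching the identity,
    // or null when nothing has been configured for this user.
    fun get(identity: IdentityInfo): SoundEffectParams? = mapping[identity.userName]
}
```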
  • the method for setting the sound effect parameter may include the following steps:
  • the identity information of the user may include at least one of a username, a gender, an age, and the like.
  • the user can select a set of sound effect parameters existing in the user terminal as the corresponding sound effect parameter.
  • the user terminal can also recommend suitable sound effect parameters according to the user's identity information; for example, for an elderly user, it can recommend sound effect parameters suitable for the elderly. The sound effect parameter may also be set by the user through manual adjustment.
  • Step 104: When the user terminal receives an audio output instruction, the target sound effect parameter is loaded for audio playback.
  • the acquired target sound effect parameter may be loaded to replace the current sound effect parameter of the user terminal for audio playback.
  • the user terminal receives the audio output command, which may be triggered by the user (such as the user clicking to play an audio file), or may be triggered by the user terminal itself (such as when the alarm sounds, the incoming call, etc.).
  • Step 105: When no user login is detected and the user terminal receives an audio output instruction, the default sound effect parameter is loaded for audio playback.
  • the default sound effect parameter in the user terminal may be loaded to perform audio playback.
  • the default sound effect parameter may be the original sound effect parameter of the user terminal, such as a fixed sound effect parameter preset by the user; or it may be the sound effect parameter previously adjusted on the user terminal, for example, the sound effect parameter used at the previous login.
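  • Putting the steps together, the sketch below wires the FIG. 1 flow (steps 101-105) around the SoundEffectParamList above; AudioPlayer and the method names are assumptions for this example, not APIs from the disclosure.

```kotlin
// Hypothetical playback interface used only to make the sketch runnable.
interface AudioPlayer {
    fun play(source: String, params: SoundEffectParams)
}

class SoundEffectAdjuster(
    private val paramList: SoundEffectParamList,
    private val defaultParams: SoundEffectParams,
    private val player: AudioPlayer
) {
    private var loadedParams: SoundEffectParams = defaultParams

    // Steps 101-103: when a login is detected, look up the parameter that
    // matches the identity; when there is no login (or no entry), fall back
    // to the default sound effect parameter (step 105).
    fun onLoginStateChanged(loggedInIdentity: IdentityInfo?) {
        loadedParams = loggedInIdentity?.let { paramList.get(it) } ?: defaultParams
    }

    // Step 104: when an audio output instruction arrives, play the audio
    // with the currently loaded sound effect parameter.
    fun onAudioOutputInstruction(source: String) {
        player.play(source, loadedParams)
    }
}
```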
  • it can be seen that, by implementing the method described in FIG. 1, when it is detected that the target application in the user terminal has a user login, the target identity information of the user may be acquired, and the target sound effect parameter matching the target identity information is acquired according to the target identity information; when the user terminal receives an audio output instruction, the target sound effect parameter is loaded for audio playback.
  • FIG. 2 is a schematic flowchart of another sound effect adjustment method according to an embodiment of the present invention. As shown in FIG. 2, the sound effect adjustment method may include the following steps:
  • Step 201: Detect whether the target application in the user terminal has a user login; if yes, execute step 202; if no, execute step 206.
  • Step 202: Acquire the target identity information of the user, where the target identity information may include, but is not limited to, at least one of a user name (or login account), age, gender, and the like.
  • Step 203: Acquire target data, where the target data may include, but is not limited to, at least one of the location information of the user terminal, the current system time of the user terminal, the current scene mode of the user terminal, and the volume value of the environment in which the user terminal is currently located.
  • the location information of the user terminal may be the location information of the current location of the user terminal, and may be obtained by using a GPS (Global Positioning System) in the user terminal, or may be acquired by using a base station. It can also be obtained by Wi-Fi positioning, etc., which is not limited by the embodiment of the present invention.
  • the current location information of the user terminal may be represented by a latitude and longitude coordinate, or may be a specific actual address, such as a province, a street, a house number, and the like in which the terminal is located.
  • the location information of the user terminal may also be the location information corresponding to the geographic location where the user terminal has historically been active most frequently; within a preset time (for example, one month, one week, or one day), the number of times and/or the duration that the user terminal is active at each geographic location may be counted, and the location information of the location with the most activity and/or the longest duration is used.
  • the location information of the user terminal may be a specific location or a location range, which is not limited in the embodiment of the present invention.
  • the current system time of the user terminal may be the time currently output by the user terminal, such as 8:30 and 19:00.
  • the current scene mode of the user terminal may include, but is not limited to, a standard mode, a conference mode, an airplane mode, a silent mode, and the like.
  • the volume value of the environment in which the user terminal is currently located is the loudness of the ambient noise.
  • the unit can be decibel and can be collected and evaluated through the microphone of the user terminal.
  • the sound effect parameters corresponding to the same target identity information under different target data may be different.
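  • The sketch below models this target data and a parameter list keyed by both identity and context; all names are illustrative assumptions, and the raw target data is reduced to a discrete context key (location label, time range, scene mode, or volume range) before lookup, as the later sketches show.

```kotlin
// Illustrative container for the target data enumerated above.
data class TargetData(
    val location: String? = null,        // e.g. "home", "office", or a coordinate string
    val systemTimeMinutes: Int? = null,  // current system time as minutes since midnight
    val sceneMode: String? = null,       // e.g. "standard", "conference", "silent"
    val ambientVolumeDb: Double? = null  // ambient noise loudness in decibels
)

// Parameter list keyed by (user name, context key): the same identity may
// map to different sound effect parameters under different target data.
class ContextualParamList {
    private val mapping = mutableMapOf<Pair<String, String>, SoundEffectParams>()

    fun put(identity: IdentityInfo, contextKey: String, params: SoundEffectParams) {
        mapping[identity.userName to contextKey] = params
    }

    // Step 204-style lookup.
    fun get(identity: IdentityInfo, contextKey: String): SoundEffectParams? =
        mapping[identity.userName to contextKey]
}
```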
  • as an optional implementation, before step 201, the method described in FIG. 2 may further include the following step: presetting and storing a sound effect parameter list for different users, where the sound effect parameter list includes mapping relationships between the identity information of different users and sound effect parameters.
  • the specific implementation manner of step 204, acquiring the target sound effect parameter corresponding to the target identity information and the target data according to the target identity information, may include: acquiring, according to the target identity information, the sound effect parameter corresponding to the target identity information and the target data from the sound effect parameter list as the target sound effect parameter. As an optional implementation, the sound effect parameter corresponding to the target identity information and the location information of the user terminal may be acquired as the target sound effect parameter.
  • the sound effect parameters corresponding to the same identity information under different location information may be different.
  • the sound effect parameter list may include a correspondence relationship between identity information, location information, and sound effect parameters of different users.
  • the sound effects of the same user under different location information may be different.
  • for example, the sound effect parameters of the user at home and in the office may be set to be different.
  • in this way, the location information of the user terminal is further considered on the basis of the identity information, so that the sound effect parameter can change with the location information, making the sound effect parameter more suitable for the user, more personalized, and the sound effect better.
  • as another optional implementation, acquiring, according to the target identity information, the sound effect parameter corresponding to the target identity information and the target data from the sound effect parameter list as the target sound effect parameter may include the following steps: determining a preset time range to which the current system time of the user terminal belongs; and acquiring, according to the target identity information, the sound effect parameter corresponding to the target identity information and the preset time range as the target sound effect parameter.
  • the sound effect parameters corresponding to the same identity information may be different in different time ranges.
  • the sound effect parameter list may include a correspondence relationship between the identity information, the time range, and the sound effect parameter of different users.
  • the sound effects parameters of the same user in different time ranges may be different.
  • for example, the sound effect parameters corresponding to the user at 8:00-11:00 and at 21:00-24:00 may be set to be different, or the sound effect parameters corresponding to the user on rest days and on working days may be set to be different.
  • in this way, the system time of the user terminal is further considered on the basis of the identity information, so that the sound effect parameter can change with time, making the sound effect parameter more suitable for the user, more personalized, and the sound effect better.
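  • A minimal sketch of this time-based variant is shown below: the current system time is first mapped to a preset time range, and the (identity, time range) pair is then looked up; the TimeRange type and the 8:00-11:00 / 21:00-24:00 ranges reuse the examples above, while the function and key names are assumptions.

```kotlin
// A preset time range, expressed in minutes since midnight, [start, end).
data class TimeRange(val startMinute: Int, val endMinute: Int) {
    fun contains(minuteOfDay: Int) = minuteOfDay in startMinute until endMinute
    fun label() = "%02d:%02d-%02d:%02d".format(
        startMinute / 60, startMinute % 60, endMinute / 60, endMinute % 60
    )
}

// e.g. 8:00-11:00 and 21:00-24:00, as in the example above.
val presetTimeRanges = listOf(TimeRange(8 * 60, 11 * 60), TimeRange(21 * 60, 24 * 60))

// Determine the preset time range the current system time belongs to, then
// look up the sound effect parameter for (identity, time range).
fun lookupByTime(list: ContextualParamList, identity: IdentityInfo, minuteOfDay: Int): SoundEffectParams? {
    val range = presetTimeRanges.firstOrNull { it.contains(minuteOfDay) } ?: return null
    return list.get(identity, "time:" + range.label())
}
```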
  • as another optional implementation, acquiring, according to the target identity information, the sound effect parameter corresponding to the target identity information and the target data from the sound effect parameter list as the target sound effect parameter may be: acquiring, according to the target identity information, the sound effect parameter corresponding to the target identity information and the current scene mode of the user terminal as the target sound effect parameter.
  • the same identity information has different sound effect parameters in different scene modes.
  • the sound effect parameter list may include a correspondence relationship between the identity information, the scene mode, and the sound effect parameter of different users.
  • the sound effects parameters corresponding to the same user in different scene modes may be different.
  • the sound effect parameters corresponding to the user in the standard mode and the conference mode may be set to be different.
  • in this way, the scene mode of the user terminal is further combined on the basis of the identity information, so that the sound effect parameter can change with the scene mode, making the sound effect parameter more suitable for the user, more personalized, and the sound effect better.
  • as another optional implementation, acquiring, according to the target identity information, the sound effect parameter corresponding to the target identity information and the target data from the sound effect parameter list as the target sound effect parameter may include the following steps: determining a preset volume range to which the volume value of the environment in which the user terminal is currently located belongs; and acquiring, according to the target identity information, the sound effect parameter corresponding to the target identity information and the preset volume range as the target sound effect parameter.
  • the sound effect parameters corresponding to the same identity information may be different under different volume ranges.
  • the sound effect parameter list may include a correspondence relationship between the identity information, the volume range, and the sound effect parameter of different users.
  • the sound effects parameters corresponding to different volume ranges of the same user may be different.
  • for example, the sound effect parameters of the same user corresponding to ambient volume values of 0-20 decibels and of 20-40 decibels may be set to be different.
  • in this way, the volume value of the external environment is further combined on the basis of the identity information, so that the sound effect parameter can change with the ambient volume value, making the sound effect parameter more suitable for the user, more personalized, and the sound effect better.
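  • Analogously, the ambient-volume variant can first bucket the measured loudness into a preset volume range and then look up the (identity, volume range) pair; the 0-20 dB and 20-40 dB ranges are the examples from the text, while the type and function names are assumptions.

```kotlin
// A preset ambient volume range in decibels, [minDb, maxDb).
data class VolumeRange(val minDb: Double, val maxDb: Double) {
    fun contains(db: Double) = db >= minDb && db < maxDb
    fun label() = "${minDb.toInt()}-${maxDb.toInt()}dB"
}

// e.g. 0-20 dB and 20-40 dB, as in the example above.
val presetVolumeRanges = listOf(VolumeRange(0.0, 20.0), VolumeRange(20.0, 40.0))

// Determine the preset volume range the ambient volume belongs to, then
// look up the sound effect parameter for (identity, volume range).
fun lookupByAmbientVolume(list: ContextualParamList, identity: IdentityInfo, ambientVolumeDb: Double): SoundEffectParams? {
    val range = presetVolumeRanges.firstOrNull { it.contains(ambientVolumeDb) } ?: return null
    return list.get(identity, "volume:" + range.label())
}
```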
  • as an optional implementation, when the user terminal is connected to the network, the target sound effect parameter set by the user may be uploaded to a server, so that other users in a similar environment can directly select this sound effect parameter as their own without setting it manually.
  • the target sound effect parameters can be changed at any time. After the change is completed, the sound effect parameter list will also be updated accordingly.
  • as an optional implementation, the user terminal may upload the target sound effect parameter to the server; if the data in the user terminal is cleared, the user may log in again on the user terminal with the target identity information and download the target sound effect parameter corresponding to the target identity information from the server, thereby avoiding having to reset the sound effect parameter because of data loss in the user terminal.
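  • A sketch of this optional synchronisation is given below; ParamServer is a hypothetical interface invented for the example (no server API is specified in the disclosure).

```kotlin
// Hypothetical server interface for storing sound effect parameters per user.
interface ParamServer {
    fun upload(userName: String, params: SoundEffectParams)
    fun download(userName: String): SoundEffectParams?
}

class ParamSync(private val server: ParamServer, private val local: SoundEffectParamList) {

    // When the user changes the parameter: update the local parameter list
    // and, if the network is available, upload the new value to the server.
    fun onParamsChanged(identity: IdentityInfo, params: SoundEffectParams, networkAvailable: Boolean) {
        local.put(identity, params)
        if (networkAvailable) server.upload(identity.userName, params)
    }

    // After local data has been cleared: log in again with the same identity
    // and restore the parameter from the server, if one was uploaded.
    fun restoreAfterDataLoss(identity: IdentityInfo): SoundEffectParams? =
        server.download(identity.userName)?.also { local.put(identity, it) }
}
```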
  • Step 206: When no user login is detected and the user terminal receives an audio output instruction, the default sound effect parameter is loaded for audio playback.
  • it can be seen that, by implementing the method described in FIG. 2, the corresponding sound effect parameter can be loaded by identifying the user's identity information, thereby automatically switching different sound effects for different users, which improves operating convenience and effectively improves the user experience and the listening effect.
  • the sound effect parameter can be further optimized by considering at least one of the location information of the user terminal, the system time, the scene mode, and the ambient volume value, so that the sound effect parameter is more suitable for the user, more humanized, and the sound effect is better.
  • FIG. 3 is a schematic structural diagram of a user terminal according to an embodiment of the present invention, which may be used to perform a sound effect adjustment method disclosed in an embodiment of the present invention.
  • the user terminal may include:
  • the detecting unit 301 is configured to detect whether the target application in the user terminal has a user login.
  • the target application in the user terminal may be an application that is provided in the user terminal, or may be a third-party application that is downloaded and installed, which is not limited in the embodiment of the present invention.
  • the detecting unit 301 detects whether the audio player in the user terminal has a user login.
  • the detecting unit 301 can also detect whether there is a user login in the user terminal, that is, before using the user terminal, the user terminal can be logged in first.
  • the first obtaining unit 302 is configured to acquire the target identity information of the user when the detecting unit 301 detects that the target application has a user login.
  • before the user logs in, the user can register on the target application platform in the user terminal, and registration may be performed by means of a user name and a password, wherein the user name may be a nickname or an email address input by the user, and the password may include, but is not limited to, a text string password, a gesture password, and a biometric information password.
  • the biometric information may include, but is not limited to, a combination of one or more of facial feature information, fingerprint information, iris information, retinal information, and voiceprint information.
  • you can also enter information such as the user's age, gender, and preferences.
  • after the user successfully logs in, the first obtaining unit 302 may obtain the target identity information of the user, where the target identity information may include, but is not limited to, at least one of the user name, age, gender, and the like.
  • the second obtaining unit 303 is configured to acquire, according to the target identity information, a target sound effect parameter corresponding to the target identity information.
  • the target sound effect parameter may include, but is not limited to, at least one of a volume value, a sound effect style (such as acoustic, rock, classical, pop, jazz, etc.), a scene mode (such as concert hall mode, room mode, headphone mode, KTV mode, etc.), stereo, and single/dual channel.
  • the volume value can be adjusted through the volume control, and the equalizer can be used to adjust the sound style, scene mode and other information.
  • the target sound effect parameters corresponding to different target identity information may be different. For example, different sound effect parameters may be set for different users. When the user registers, the corresponding sound effect parameters can be set.
  • the loading unit 304 is configured to load the target sound effect parameter for audio playback when the user terminal receives the audio output instruction.
  • the loading unit 304 may load the acquired target sound effect parameter and replace the current sound effect parameter of the user terminal with the target sound effect parameter to perform audio playback.
  • the user terminal receives the audio output command, which may be triggered by the user (such as the user clicking to play an audio file), or may be triggered by the user terminal itself (such as when the alarm sounds, the incoming call, etc.).
  • the loading unit 304 is further configured to: when the detecting unit 301 detects that the target application has no user login, and when the user terminal receives the audio output instruction, load the default sound effect parameter in the user terminal. Perform audio playback.
  • the default sound effect parameter may be the original sound effect parameter of the user terminal, or may be the sound effect parameter of the user terminal before the previous adjustment.
  • the embodiment of the present invention may perform functional unit division on a user terminal according to the foregoing method example.
  • for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the foregoing detecting unit 301, the first obtaining unit 302, the second obtaining unit 303, and the loading unit 304 may be integrated into a central processing unit (CPU).
  • the division of the unit in the embodiment of the present invention is schematic, and is only a logical function division, and the actual implementation may have another division manner.
  • FIG. 4 is a schematic structural diagram of another user terminal according to an embodiment of the present disclosure, which may be used to perform a sound effect adjustment method disclosed in an embodiment of the present invention.
  • the user terminal shown in FIG. 4 is further optimized based on the user terminal shown in FIG. 3.
  • the user terminal shown in FIG. 4 may further include:
  • the setting unit 305 is configured to preset and store a sound effect parameter list for different users before the detecting unit 301 detects whether the target application in the user terminal has a user login, and the sound effect parameter list includes mapping relationships between the identity information of different users and sound effect parameters.
  • the specific implementation manner that the second obtaining unit 303 acquires the target sound effect parameter corresponding to the target identity information according to the target identity information may be:
  • the second obtaining unit 303 acquires a target sound effect parameter corresponding to the target identity information from the sound effect parameter list according to the target identity information.
  • the user terminal shown in FIG. 4 may further include:
  • the third obtaining unit 306 is configured to acquire target data, where the target data may include, but is not limited to, location information of the user terminal, current system time of the user terminal, a current scene mode of the user terminal, and a volume value of the current environment of the user terminal. At least one of them.
  • the specific implementation manner that the second obtaining unit 303 obtains the target sound effect parameter corresponding to the target identity information from the sound effect parameter list according to the target identity information may be:
  • the second obtaining unit 303 acquires, according to the target identity information, a sound effect parameter corresponding to the target identity information and the target data from the sound effect parameter list as a target sound effect parameter.
  • the same target identity information may have different corresponding sound effect parameters under different target data.
  • the sound effect parameter list may include a correspondence relationship between the identity information, the target data, and the sound effect parameter.
  • as an optional implementation, the specific implementation manner in which the second obtaining unit 303 obtains, according to the target identity information, the sound effect parameter corresponding to the target identity information and the target data from the sound effect parameter list as the target sound effect parameter may be: the second obtaining unit 303 obtains, according to the target identity information, the sound effect parameter corresponding to the target identity information and the location information of the user terminal as the target sound effect parameter, wherein the sound effect parameters corresponding to the same identity information under different location information may be different.
  • the sound effect parameter list may include a correspondence relationship between the identity information, the location information, and the sound effect parameter.
  • as another optional implementation, the specific implementation manner in which the second obtaining unit 303 obtains, according to the target identity information, the sound effect parameter corresponding to the target identity information and the target data from the sound effect parameter list as the target sound effect parameter may be: the second obtaining unit 303 obtains, according to the target identity information, the sound effect parameter corresponding to the target identity information and the current scene mode of the user terminal as the target sound effect parameter, wherein the sound effect parameters corresponding to the same identity information in different scene modes may be different.
  • the sound effect parameter list may include a correspondence relationship between the identity information, the scene mode, and the sound effect parameter.
  • the setting unit 305 and the third obtaining unit 306 can also be integrated into a central processing unit (CPU).
  • FIG. 5 is a schematic structural diagram of another user terminal according to an embodiment of the present disclosure, which may be used to perform the sound effect adjustment method disclosed in the embodiments of the present invention.
  • the user terminal shown in FIG. 5 is further optimized based on the user terminal shown in FIG. 4.
  • the second obtaining unit 303 of the user terminal shown in FIG. 5 may include:
  • the first determining subunit 3031 is configured to determine a preset time range to which the current system time of the user terminal belongs;
  • the first obtaining sub-unit 3032 is configured to obtain, according to the target identity information, a sound effect parameter corresponding to the target identity information and the preset time range as the target sound effect parameter, where the sound effect parameters corresponding to the same identity information in different time ranges may be different.
  • the sound effect parameter list may include a correspondence relationship between the identity information, the time range, and the sound effect parameter.
  • FIG. 6 is a schematic structural diagram of another user terminal disclosed in an embodiment of the present invention, which may be used to perform the sound effect adjustment method disclosed in the embodiments of the present invention. The user terminal shown in FIG. 6 is further optimized based on the user terminal shown in FIG. 4. Compared with the user terminal shown in FIG. 4, the second obtaining unit 303 of the user terminal shown in FIG. 6 may include:
  • the second determining sub-unit 3033 is configured to determine a preset volume range to which the volume value of the environment in which the user terminal is currently located belongs;
  • the second obtaining sub-unit 3034 is configured to obtain, according to the target identity information, a sound effect parameter corresponding to the target identity information and the preset volume range as the target sound effect parameter, where the sound effect parameters corresponding to the same identity information in different volume ranges may be different.
  • the sound effect parameter list may include a correspondence relationship between the identity information, the volume range, and the sound effect parameter.
  • it can be seen that the corresponding sound effect parameter can be loaded by identifying the identity information of the user, so that different sound effects can be switched automatically for different users, thereby improving operating convenience and effectively improving the user experience and the listening effect.
  • the sound effect parameter can be further optimized by considering at least one of the location information of the user terminal, the system time, the scene mode, and the ambient volume value, so that the sound effect parameter is more suitable for the user, more humanized, and the sound effect is better.
  • FIG. 7 is a schematic structural diagram of another user terminal according to an embodiment of the present disclosure, which may be used to perform a sound effect adjustment method disclosed in an embodiment of the present invention.
  • the user terminal 700 can include at least one processor 701 and a memory 704.
  • the user terminal 700 further includes at least one input device 702 and at least one output device 703.
  • these components can be communicatively connected through one or more buses 705.
  • the structure of the user terminal shown in FIG. 7 does not constitute a limitation on the embodiments of the present invention; it may be a bus structure or a star structure, and it may also include more or fewer components than illustrated, combine some components, or arrange the components differently, wherein:
  • the processor 701 is the control center of the user terminal; it connects various parts of the entire user terminal by using various interfaces and lines, and performs the various functions of the user terminal and processes data by running or executing programs and/or modules stored in the memory 704 and calling data stored in the memory 704.
  • the processor 701 may be composed of an integrated circuit (IC); for example, it may be composed of a single packaged IC, or may be composed of multiple packaged ICs with the same or different functions connected together.
  • the processor 701 may include only a central processing unit (CPU), or may be a combination of a CPU, a digital signal processor (DSP), a graphics processing unit (GPU), and various control chips.
  • the CPU may be a single operation core, and may also include multiple operation cores.
  • the input device 702 may include a standard touch screen, a keyboard, and the like, and may also include a wired interface, a wireless interface, and the like, and may be used to implement interaction between the user and the user terminal 700.
  • the output device 703 may include a display screen, a speaker, and the like, and may also include a wired interface, a wireless interface, and the like.
  • the memory 704 can be used to store applications and modules, and the processor 701, the input device 702, and the output device 703 perform the various functional applications of the user terminal and implement data processing by calling the applications and modules stored in the memory 704. The memory 704 mainly includes a program storage area and a data storage area, wherein the program storage area can store an operating system, an application required for at least one function, and the like, and the data storage area can store data created according to the use of the user terminal, and the like.
  • the operating system may be an Android system, an iOS system, a Windows operating system, or the like.
  • the processor 701 calls the application stored in the memory 704 to perform the following operations: detecting whether the target application in the user terminal 700 has a user login; when it is detected that the target application has a user login, acquiring the target identity information of the user; acquiring, according to the target identity information, the target sound effect parameter corresponding to the target identity information; and when the user terminal 700 receives an audio output instruction, triggering the output device 703 to load the target sound effect parameter for audio playback.
  • the processor 701 may also call an application stored in the memory 704 before detecting whether the target application in the user terminal 700 has a user login, and perform the following operations:
  • the sound effect parameter list is preset and stored in the memory 704, and the sound effect parameter list includes mapping relationship between the identity information of different users and the sound effect parameter;
  • the specific implementation manner in which the processor 701 acquires, according to the target identity information, the target sound effect parameter corresponding to the target identity information may be: acquiring, according to the target identity information, the target sound effect parameter corresponding to the target identity information from the sound effect parameter list.
  • the processor 701 can also invoke the application stored in the memory 704 and perform the following operation: acquiring target data, where the target data includes at least one of the location information of the user terminal 700, the current system time of the user terminal 700, the current scene mode of the user terminal 700, and the volume value of the environment in which the user terminal 700 is currently located;
  • the specific implementation manner in which the processor 701 obtains the target sound effect parameter corresponding to the target identity information from the sound effect parameter list according to the target identity information may then be: acquiring, according to the target identity information, the sound effect parameter corresponding to the target identity information and the target data from the sound effect parameter list as the target sound effect parameter.
  • the specific implementation manner in which the processor 701 acquires, according to the target identity information, the sound effect parameter corresponding to the target identity information and the target data from the sound effect parameter list as the target sound effect parameter may be: determining a preset time range to which the current system time of the user terminal 700 belongs, and acquiring the sound effect parameter corresponding to the target identity information and the preset time range as the target sound effect parameter, where the sound effect parameters corresponding to the same identity information may be different in different time ranges.
  • the specific implementation manner in which the processor 701 obtains, according to the target identity information, the sound effect parameter corresponding to the target identity information and the target data from the sound effect parameter list as the target sound effect parameter may also be: acquiring the sound effect parameter corresponding to the target identity information and the current scene mode of the user terminal 700 as the target sound effect parameter, wherein the sound effect parameters corresponding to the same identity information may be different in different scene modes.
  • the specific implementation manner in which the processor 701 acquires, according to the target identity information, the sound effect parameter corresponding to the target identity information and the target data from the sound effect parameter list as the target sound effect parameter may also be: determining a preset volume range to which the volume value of the environment in which the user terminal 700 is currently located belongs, and acquiring the sound effect parameter corresponding to the target identity information and the preset volume range as the target sound effect parameter, wherein the sound effect parameters corresponding to the same identity information may be different in different volume ranges.
  • the processor 701 can also invoke the application stored in the memory 704 and perform the following operation: when it is detected that the target application has no user login and the user terminal 700 receives an audio output instruction, triggering the output device 703 to load the default sound effect parameter for audio playback.
  • the user terminal introduced in the embodiments of the present invention may implement some or all of the processes in the embodiments of the sound effect adjustment method introduced in conjunction with FIG. 1 or FIG. 2.
  • it can be seen that the corresponding sound effect parameter can be loaded by identifying the identity information of the user, so that different sound effects can be switched automatically for different users, which improves operating convenience and effectively improves the user experience and the listening effect.
  • the sound effect parameter can be further optimized by considering at least one of the location information of the user terminal, the system time, the scene mode, and the ambient volume value, so that the sound effect parameter is more suitable for the user, more humanized, and the sound effect is better.
  • FIG. 8 is a schematic structural diagram of another user terminal according to an embodiment of the present disclosure, which may be used to perform a sound effect adjustment method disclosed in an embodiment of the present invention.
  • the user terminal may include various terminals such as a mobile phone, a tablet computer, a palmtop computer, a PDA, a MID, a multimedia player, a smart wearable device, and an in-vehicle terminal; the following takes a mobile phone as an example:
  • FIG. 8 is a schematic diagram showing a partial structure of a mobile phone related to a user terminal disclosed in an embodiment of the present invention.
  • the mobile phone includes: a radio frequency (RF) circuit 810 , a memory 820 , an input unit 830 , a display unit 840 , a sensor 850 , an audio circuit 860 , a wireless fidelity (WiFi) module 870 , and a processor 880 .
  • the RF circuit 810 can be used for receiving and transmitting signals during the transmission or reception of information or during a call. Specifically, after receiving the downlink information of the base station, it is processed by the processor 880. In addition, the uplink data is designed to be sent to the base station. Generally, RF circuit 810 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, RF circuitry 810 can also communicate with the network and other devices via wireless communication. The above wireless communication may use any communication standard or protocol, including but not limited to Global System of Mobile Communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (Code Division). Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), E-mail, Short Messaging Service (SMS), and the like.
  • the memory 820 can be used to store software programs and modules, and the processor 880 executes various functional applications and data processing of the mobile phone by running software programs and modules stored in the memory 820.
  • the memory 820 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may be stored according to Data created by the use of the mobile phone (such as audio data, phone book, etc.).
  • memory 820 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
  • the input unit 830 can be configured to receive input numeric or character information and to generate key signal input related to user settings and function control of the mobile phone.
  • the input unit 830 may include a touch panel 831 and other input devices 832.
  • the touch panel 831, also referred to as a touch screen, can collect the user's touch operations on or near it (such as operations performed by the user on or near the touch panel 831 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connecting device according to a preset program.
  • the touch panel 831 can include two parts: a touch detection device and a touch controller.
  • the touch detection device detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends it to the processor 880, and can receive commands from the processor 880 and execute them.
  • the touch panel 831 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • the input unit 830 may also include other input devices 832.
  • other input devices 832 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like.
  • the display unit 840 can be used to display information input by the user or information provided to the user as well as various menus of the mobile phone.
  • the display unit 840 can include a display panel 841.
  • the display panel 841 can be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
  • the touch panel 831 can cover the display panel 841; when the touch panel 831 detects a touch operation on or near it, it transmits the operation to the processor 880 to determine the type of the touch event, and the processor 880 then provides a corresponding visual output on the display panel 841 according to the type of the touch event.
  • although the touch panel 831 and the display panel 841 are shown as two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 831 can be integrated with the display panel 841 to implement the input and output functions of the mobile phone.
  • the handset can also include at least one type of sensor 850, such as a light sensor, motion sensor, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 841 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 841 and/or the backlight when the mobile phone is moved to the ear.
  • the accelerometer sensor can detect the magnitude of acceleration in all directions (usually three axes) and, when stationary, can detect the magnitude and direction of gravity; it can be used to identify the attitude of the mobile phone (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and for vibration-recognition-related functions (such as a pedometer and tapping), and so on.
  • Other sensors such as gyro, barometer, hygrometer, thermometer, infrared sensor, etc., are not described here.
  • An audio circuit 860, a speaker 861, and a microphone 862 can provide an audio interface between the user and the handset.
  • on the one hand, the audio circuit 860 can transmit the electrical signal converted from the received audio data to the speaker 861, which converts it into a sound signal for output; on the other hand, the microphone 862 converts the collected sound signal into an electrical signal, which the audio circuit 860 receives and converts into audio data; after the audio data is processed by the processor 880, it is sent to another mobile phone via the RF circuit 810, or output to the memory 820 for further processing.
  • WiFi is a short-range wireless transmission technology
  • the mobile phone can help users to send and receive emails, browse web pages, and access streaming media through the WiFi module 870, which provides users with wireless broadband Internet access.
  • although FIG. 8 shows the WiFi module 870, it can be understood that it is not an essential part of the mobile phone and can be omitted as needed without changing the essence of the invention.
  • the processor 880 is the control center of the handset; it connects the various parts of the entire handset using various interfaces and lines, and performs the various functions of the handset and processes data by running or executing the software programs and/or modules stored in the memory 820 and invoking the data stored in the memory 820, thereby monitoring the handset as a whole.
  • the processor 880 may include one or more processing units; preferably, the processor 880 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, an application, and the like.
  • the modem processor primarily handles wireless communications. It will be appreciated that the above described modem processor may also not be integrated into the processor 880.
  • the handset also includes a power supply 890 (such as a battery) that supplies power to the various components.
  • a power supply 890 can be logically coupled to the processor 880 through a power management system to manage functions such as charging, discharging, and power management through the power management system.
  • the mobile phone may further include a camera, a Bluetooth module, and the like, and details are not described herein.
  • the processor 880 included in the user terminal further has a function corresponding to the processor 701 of the foregoing embodiment, and details are not described herein again.
  • the embodiment of the present invention further provides a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, the computer program causing the computer to perform some or all of the steps of any of the methods described in the foregoing method embodiments.
  • the computer includes a user terminal.
  • Embodiments of the present invention also provide a computer program product, the computer program product including a non-transitory computer readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps of any of the methods described in the foregoing method embodiments.
  • the computer program product can be a software installation package, the computer including a user terminal.
  • Modules or sub-modules in all embodiments of the present invention may be implemented by a general-purpose integrated circuit, such as a CPU, or by an ASIC (Application Specific Integrated Circuit).
  • units or subunits in the user terminal may be combined, divided, and deleted according to actual needs.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Telephone Function (AREA)
  • User Interface Of Digital Computer (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The invention relates to a sound effect adjustment method and a user terminal, the method comprising: detecting whether a target application on a user terminal has a user login (101); when it is detected that the target application has a user login, obtaining target identity information of the user (102); obtaining, according to the target identity information, a target sound effect parameter corresponding to the target identity information (103); and loading the target sound effect parameter for audio playback when the user terminal receives an audio output instruction (104). The sound effect adjustment method and the user terminal can automatically switch between different sound effects for different users, thereby improving operating convenience and improving the user experience.
PCT/CN2017/088671 2016-06-16 2017-06-16 Procédé d'ajustement d'effet sonore et terminal d'utilisateur WO2017215649A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610438952.0 2016-06-16
CN201610438952.0A CN105955700A (zh) 2016-06-16 2016-06-16 一种音效调节方法及用户终端

Publications (1)

Publication Number Publication Date
WO2017215649A1 (fr)

Family

ID=56906535

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/088671 WO2017215649A1 (fr) 2016-06-16 2017-06-16 Procédé d'ajustement d'effet sonore et terminal d'utilisateur

Country Status (2)

Country Link
CN (1) CN105955700A (fr)
WO (1) WO2017215649A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111736477A (zh) * 2020-06-05 2020-10-02 海尔优家智能科技(北京)有限公司 环境参数调整方法、装置、存储介质、电子装置
CN112336370A (zh) * 2019-08-09 2021-02-09 深圳市理邦精密仪器股份有限公司 胎心音处理方法、装置、医疗设备及计算机存储介质
CN113593279A (zh) * 2021-07-22 2021-11-02 海信集团控股股份有限公司 车辆及其交互参数调整方法、移动终端

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105955700A (zh) * 2016-06-16 2016-09-21 广东欧珀移动通信有限公司 一种音效调节方法及用户终端
WO2018176466A1 (fr) * 2017-04-01 2018-10-04 深圳市智晟达科技有限公司 Procédé pour partager des préférences de réglage vidéo selon un compte d'ouverture de session, et télévision numérique
CN107539219A (zh) * 2017-07-04 2018-01-05 芜湖市振华戎科智能科技有限公司 人车关联交互装置
WO2019033436A1 (fr) * 2017-08-18 2019-02-21 广东欧珀移动通信有限公司 Procédé de réglage du volume, dispositif, support de stockage et terminal mobile
CN108347672B (zh) * 2018-02-09 2021-01-22 广州酷狗计算机科技有限公司 播放音频的方法、装置及存储介质
CN109119088A (zh) * 2018-08-29 2019-01-01 歌尔科技有限公司 一种音频信号的调节方法、装置、设备及计算机存储介质
CN109271128A (zh) * 2018-09-04 2019-01-25 Oppo广东移动通信有限公司 音效设置方法、装置、电子设备及存储介质
CN109410900B (zh) * 2018-09-04 2022-06-21 Oppo广东移动通信有限公司 音效处理方法、装置以及电子设备
CN109243413B (zh) * 2018-09-25 2023-02-10 Oppo广东移动通信有限公司 3d音效处理方法及相关产品
CN110049404B (zh) * 2019-04-23 2021-08-06 深圳慧安康科技有限公司 智能装置及其音量控制方法
CN111930990B (zh) * 2019-05-13 2024-05-10 阿里巴巴集团控股有限公司 确定电子书语音播放设置的方法、系统及终端设备
CN112740169A (zh) * 2019-12-23 2021-04-30 深圳市易优斯科技有限公司 均衡器设置方法、装置、设备及计算机可读存储介质
CN111343497A (zh) * 2020-02-27 2020-06-26 深圳创维-Rgb电子有限公司 播放设备的音效调整方法、播放设备以及存储介质
CN112188342A (zh) * 2020-09-25 2021-01-05 江苏紫米电子技术有限公司 均衡参数确定方法、装置、电子设备和存储介质
CN112717395B (zh) * 2021-01-28 2023-03-03 腾讯科技(深圳)有限公司 音频绑定方法、装置、设备以及存储介质
CN113127678A (zh) * 2021-04-23 2021-07-16 广州酷狗计算机科技有限公司 音频处理方法、装置、终端及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104517621A (zh) * 2014-12-12 2015-04-15 小米科技有限责任公司 设备配置方法和装置
CN104918109A (zh) * 2015-06-08 2015-09-16 小米科技有限责任公司 智能播放的方法及装置
CN104966522A (zh) * 2015-06-30 2015-10-07 广州酷狗计算机科技有限公司 音效调节方法、云端服务器、音响设备及系统
CN105025415A (zh) * 2015-06-08 2015-11-04 广东欧珀移动通信有限公司 一种音效切换方法及用户终端
US20160103653A1 (en) * 2014-10-14 2016-04-14 Samsung Electronics Co., Ltd. Electronic device, method of controlling volume of the electronic device, and method of controlling the electronic device
CN105955700A (zh) * 2016-06-16 2016-09-21 广东欧珀移动通信有限公司 一种音效调节方法及用户终端

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100421152C (zh) * 2004-07-30 2008-09-24 英业达股份有限公司 声音控制系统以及方法
KR101542379B1 (ko) * 2008-08-28 2015-08-06 엘지전자 주식회사 영상 표시 장치 및 사용자별 시청 환경 설정 방법
CN103327173B (zh) * 2013-05-17 2015-10-28 广东欧珀移动通信有限公司 一种移动终端的声音控制方法及装置
CN104010147B (zh) * 2014-04-29 2017-11-07 京东方科技集团股份有限公司 自动调节音频播放系统音量的方法和音频播放装置
CN104112459B (zh) * 2014-06-25 2017-02-15 小米科技有限责任公司 播放音频数据的方法和装置
CN104469670B (zh) * 2014-10-22 2016-12-21 广东小天才科技有限公司 一种基于移动终端位置切换音效模式的方法及移动终端
CN105142021B (zh) * 2015-08-11 2019-02-22 京东方科技集团股份有限公司 显示控制系统、显示控制方法和显示装置

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160103653A1 (en) * 2014-10-14 2016-04-14 Samsung Electronics Co., Ltd. Electronic device, method of controlling volume of the electronic device, and method of controlling the electronic device
CN104517621A (zh) * 2014-12-12 2015-04-15 小米科技有限责任公司 设备配置方法和装置
CN104918109A (zh) * 2015-06-08 2015-09-16 小米科技有限责任公司 智能播放的方法及装置
CN105025415A (zh) * 2015-06-08 2015-11-04 广东欧珀移动通信有限公司 一种音效切换方法及用户终端
CN104966522A (zh) * 2015-06-30 2015-10-07 广州酷狗计算机科技有限公司 音效调节方法、云端服务器、音响设备及系统
CN105955700A (zh) * 2016-06-16 2016-09-21 广东欧珀移动通信有限公司 一种音效调节方法及用户终端

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112336370A (zh) * 2019-08-09 2021-02-09 深圳市理邦精密仪器股份有限公司 胎心音处理方法、装置、医疗设备及计算机存储介质
CN112336370B (zh) * 2019-08-09 2022-07-05 深圳市理邦精密仪器股份有限公司 胎心音处理方法、装置、医疗设备及计算机存储介质
CN111736477A (zh) * 2020-06-05 2020-10-02 海尔优家智能科技(北京)有限公司 环境参数调整方法、装置、存储介质、电子装置
CN113593279A (zh) * 2021-07-22 2021-11-02 海信集团控股股份有限公司 车辆及其交互参数调整方法、移动终端

Also Published As

Publication number Publication date
CN105955700A (zh) 2016-09-21

Similar Documents

Publication Publication Date Title
WO2017215649A1 (fr) Procédé d'ajustement d'effet sonore et terminal d'utilisateur
US10649720B2 (en) Sound effect configuration method and system and related device
US11355157B2 (en) Special effect synchronization method and apparatus, and mobile terminal
CN108668009B (zh) 输入操作控制方法、装置、终端、耳机及可读存储介质
US10678942B2 (en) Information processing method and related products
WO2017215660A1 (fr) Procédé de commande d'effet sonore de scène et dispositif électronique
WO2017215652A1 (fr) Procédé d'ajustement de paramètre d'effet sonore et terminal mobile
CN108781236B (zh) 音频播放方法及电子设备
WO2017181365A1 (fr) Procédé de commande de canal d'écouteur, appareil associé et système
CN106921791B (zh) 一种多媒体文件的存储和查看方法、装置及移动终端
WO2017215635A1 (fr) Procédé de traitement d'effet sonore et terminal mobile
WO2017215661A1 (fr) Procédé de contrôle d'effet sonore basé sur un scénario, et dispositif électronique
CN106506437B (zh) 一种音频数据处理方法,及设备
WO2018103443A1 (fr) Procédé de localisation réseau et dispositif terminal
WO2017215507A1 (fr) Procédé de traitement d'effet sonore, et terminal mobile
CN112997470B (zh) 音频输出控制方法和装置、计算机可读存储介质、电子设备
WO2017215653A1 (fr) Terminal utilisateur et procédé de traitement de volume
WO2017215511A1 (fr) Procédé de commande d'effet sonore de scène et produits associés
CN107317918B (zh) 参数设置方法及相关产品
US9965733B2 (en) Method, apparatus, and communication system for updating user data based on a completion status of a combination of business task and conversation task
AU2014405030A1 (en) Media file processing method and terminal
WO2020011211A1 (fr) Terminal mobile et procédé et dispositif d'ouverture automatique de session dans une plate-forme d'application
CN112805988B (zh) 通话控制方法和装置、计算机可读存储介质、电子设备
WO2015078349A1 (fr) Procédé et appareil de commutation d'un état de réception du son d'un microphone
CN112997471A (zh) 音频通路切换方法和装置、可读存储介质、电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17812760

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17812760

Country of ref document: EP

Kind code of ref document: A1