WO2017215660A1 - Scene sound effect control method and electronic device - Google Patents

Scene sound effect control method and electronic device (一种场景音效的控制方法、及电子设备)

Info

Publication number
WO2017215660A1
WO2017215660A1 · PCT/CN2017/088788 · CN2017088788W
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
application
sound effect
audio track
scene sound
Prior art date
Application number
PCT/CN2017/088788
Other languages
English (en)
French (fr)
Inventor
李亚军
甘高亭
杨海
涂广
Original Assignee
广东欧珀移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广东欧珀移动通信有限公司 filed Critical 广东欧珀移动通信有限公司
Priority to EP17812771.8A priority Critical patent/EP3441874B1/en
Priority to US16/094,496 priority patent/US10891102B2/en
Publication of WO2017215660A1 publication Critical patent/WO2017215660A1/zh
Priority to US16/430,605 priority patent/US10817255B2/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/165Management of the audio stream, e.g. setting of volume, audio stream path
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/65Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/162Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Definitions

  • the present invention relates to the field of computer technologies, and in particular, to a method for controlling scene sound effects and an electronic device.
  • A sound effect is the effect produced by sound: noise or sound added to enhance the realism, atmosphere, or dramatic message of a scene.
  • The added noise or sound may include musical tones and effect sounds.
  • For example: digital sound effects, environmental sound effects, and MP3 sound effects (normal sound effects, professional sound effects).
  • Sound effects (also called audio effects) are therefore artificially created or enhanced sounds used to enhance the audio treatment of artistic or other content in movies, video games, music, or other media.
  • A scene sound effect is a more specific application of sound effects: it is the sound effect associated with the current application scene.
  • An embodiment of the present invention provides a method for controlling a scene sound effect, including:
  • after the electronic device is turned on, the electronic device starts a service with a monitoring function;
  • the electronic device monitors an audio track of the electronic device through the service with the monitoring function, and determines whether the audio track of the electronic device has audio output; there is a mapping relationship between the audio track of the electronic device and the applications in the electronic device;
  • if the electronic device determines that the audio track of the electronic device has audio output, the electronic device determines, according to the mapping relationship, the application that has a mapping relationship with the audio track of the electronic device;
  • the electronic device acquires the scene sound effect corresponding to the application, and sets the current sound effect of the electronic device to the scene sound effect.
  • In a second aspect, an embodiment of the present invention further provides an electronic device, including:
  • a monitoring control unit, configured to start a service with a monitoring function after the electronic device is turned on;
  • a monitoring unit, configured to monitor, through the service with the monitoring function, the audio track of the electronic device to determine whether the audio track of the electronic device has audio output; there is a mapping relationship between the audio track of the electronic device and the applications in the electronic device;
  • an application determining unit, configured to determine, according to the mapping relationship, the application that has a mapping relationship with the audio track of the electronic device, if the monitoring unit determines that the audio track of the electronic device has audio output;
  • a sound effect setting unit, configured to acquire the scene sound effect corresponding to the application, and set the current sound effect of the electronic device to the scene sound effect.
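  • For illustration only, the following minimal Java sketch shows one way the four units described above could be organized; the interface and method names (AudioTrackMonitor, SceneEffectStore, SceneSoundController, and so on) are hypothetical and are not taken from the patent.

        // Hypothetical skeleton of the functional units described above; names are illustrative only.
        import java.util.Map;

        interface AudioTrackMonitor {                       // "monitoring unit"
            boolean hasAudioOutput();                       // is any audio track currently outputting audio?
            String activeClientPackage();                   // client (application) bound to the active track
        }

        interface SceneEffectStore {                        // backing data for the "sound effect setting unit"
            String effectForApplication(String appOrType);  // scene sound effect mapped to an application or type
            void applyEffect(String effect);                // set the device's current sound effect
        }

        final class SceneSoundController {
            private final AudioTrackMonitor monitor;
            private final SceneEffectStore effects;
            private final Map<String, String> trackClientToApp; // mapping: audio-track client -> application

            SceneSoundController(AudioTrackMonitor m, SceneEffectStore e, Map<String, String> map) {
                this.monitor = m;
                this.effects = e;
                this.trackClientToApp = map;
            }

            /** Called once the device is switched on ("monitoring control unit"). */
            void onDevicePoweredOn() {
                if (monitor.hasAudioOutput()) {                                       // monitoring unit
                    String app = trackClientToApp.get(monitor.activeClientPackage()); // application determining unit
                    if (app != null) {
                        effects.applyEffect(effects.effectForApplication(app));       // sound effect setting unit
                    }
                }
            }
        }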
  • In a third aspect, an embodiment of the present invention further provides an electronic device, including: a processor, a memory, and an audio output device for outputting a scene sound effect; the processor is configured to perform the method according to any one of the embodiments of the present invention.
  • In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program for electronic data exchange, where the computer program, when executed, implements the method according to any one of the embodiments of the present invention.
  • In a fifth aspect, an embodiment of the present invention further provides a program product including a computer program, where the computer program, when executed, implements the method provided by the embodiments of the present invention.
  • As can be seen from the above technical solutions, the embodiments of the present invention have the following advantages: by monitoring the audio track and using the mapping relationship between the audio track and the application together with the correspondence between scene sound effects and applications, the current application scenario of the electronic device is accurately identified and the required scene sound effect is accurately determined.
  • The process requires no manual setting of the scene sound effect, so while a high accuracy of the scene sound effect is ensured, the operation is simplified and the use efficiency of the electronic device is improved.
  • FIG. 1 is a schematic flowchart of a method according to an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of a method according to an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
  • If the sound-processing techniques that a sound effect applies to different scenes are different, the achieved effects will also be different; therefore, improving the recognition rate of the application scene and setting the corresponding sound effect mode can greatly improve the user's listening experience.
  • At present, scene sound effects may include the sound effect of a music playing scene, the sound effect of a video playing scene, and so on; the scene sound effect is set manually, and the specific process is as follows:
  • the electronic device acquires, through an interactive interface, the scene sound effect manually input and/or selected by the user, and then sets the current scene sound effect to that manually input and/or selected scene sound effect. Setting the scene sound effect in this way is accurate, but the operation is cumbersome and lowers the use efficiency of the electronic device.
  • An embodiment of the present invention provides a method for controlling a scene sound effect, as shown in FIG. 1 , including:
  • In this embodiment, the service with the monitoring function may be implemented by a hardware entity, or may be a software function running on a hardware entity; this is not uniquely limited by the embodiments of the present invention.
  • The electronic device monitors the audio track of the electronic device through the service with the monitoring function to determine whether the audio track of the electronic device has audio output; there is a mapping relationship between the audio track of the electronic device and the applications in the electronic device.
  • The parallel "tracks" seen in sequencer software are audio tracks, and one track corresponds to one part of the music. Each track therefore defines the properties of that track, such as its timbre, sound bank, number of channels, input/output port, volume, and so on.
  • the mapping relationship between the audio track and the application may be a one-to-one correspondence.
  • The "application" here should be understood in a broad sense, for example as the application software, the client of the application, the name of the application, or the type of the application. Which of these applies depends on what the final scene sound effect needs to correspond to: if the scene sound effect corresponds to a class of applications, the application should be understood as the application type; if the scene sound effect corresponds precisely to one piece of application software, it should be understood as that application software.
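  • As a concrete illustration of this "broad sense", the short Java sketch below resolves a scene sound effect first by exact application package and then by application type; all names and effect identifiers are hypothetical and only indicate one possible data layout.

        import java.util.HashMap;
        import java.util.Map;

        final class SceneEffectResolver {
            // The mapping can be kept at whichever granularity the scene sound effect needs:
            private final Map<String, String> byPackage = new HashMap<>(); // exact application software
            private final Map<String, String> byType = new HashMap<>();    // application type

            SceneEffectResolver() {
                byPackage.put("com.example.musicplayer", "music_hifi");    // precise per-application effect
                byType.put("MUSIC", "music_default");                      // per-type effects
                byType.put("VIDEO", "movie_surround");
                byType.put("GAME", "game_low_latency");
            }

            /** Resolve the scene sound effect for the client bound to the audio track. */
            String resolve(String packageName, String appType) {
                String effect = byPackage.get(packageName);            // prefer the exact application software
                return effect != null ? effect : byType.get(appType);  // otherwise fall back to the application type
            }
        }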
  • If the electronic device determines that the audio track of the electronic device has audio output, it determines, according to the mapping relationship, the application that has a mapping relationship with the audio track of the electronic device.
  • How it is specifically monitored and determined whether the audio track has audio output is not uniquely limited by the embodiments of the present invention.
  • If the monitoring described above is implemented by a software service, a service with a monitoring function can be started at the application layer to monitor the audio track.
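  • The framework-internal audio-track monitoring described here is not exposed by the public Android SDK; as a rough, hedged stand-in, the sketch below uses the public AudioManager playback callback (API 26+) only to detect whether any application currently has audio output.

        import android.content.Context;
        import android.media.AudioManager;
        import android.media.AudioPlaybackConfiguration;
        import java.util.List;

        final class PlaybackMonitor extends AudioManager.AudioPlaybackCallback {
            private final AudioManager audioManager;

            PlaybackMonitor(Context context) {
                audioManager = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
            }

            /** Start monitoring; the callback fires whenever playback configurations change. */
            void start() {
                audioManager.registerAudioPlaybackCallback(this, null /* main-thread handler */);
            }

            @Override
            public void onPlaybackConfigChanged(List<AudioPlaybackConfiguration> configs) {
                boolean hasAudioOutput = !configs.isEmpty();  // some application currently has an active track
                // ...determine the corresponding application and apply its scene sound effect here...
            }
        }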
  • the electronic device acquires a scene sound effect corresponding to the application, and sets a current sound effect of the electronic device to the scene sound effect.
  • After the scene sound effect is set, the audio output device of the electronic device can be used; the audio output device may include a sound card and an audio output interface, or a sound card and a speaker.
  • The audio output interface can be connected to an external device such as a speaker or a headset.
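  • The text above leaves open how the "current sound effect" is actually applied; one plausible illustration on Android, offered here only as an assumption, is to enable a preset of the android.media.audiofx.Equalizer effect on the global output mix and choose the preset per scene.

        import android.media.audiofx.Equalizer;

        final class SceneEffectApplier {
            private Equalizer equalizer;

            /** Apply an equalizer preset as the "scene sound effect" (session 0 = global output mix). */
            void apply(short presetIndex) {
                if (equalizer == null) {
                    equalizer = new Equalizer(0 /* priority */, 0 /* audio session: output mix */);
                }
                if (presetIndex >= 0 && presetIndex < equalizer.getNumberOfPresets()) {
                    equalizer.usePreset(presetIndex);  // e.g. a "music", "movie" or "game" preset
                }
                equalizer.setEnabled(true);
            }

            void release() {
                if (equalizer != null) {
                    equalizer.release();
                    equalizer = null;
                }
            }
        }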
  • In this embodiment of the present invention, by monitoring the audio track and using the mapping relationship between the audio track and the application together with the correspondence between the scene sound effect and the application, the current application scenario of the electronic device is accurately identified and the required scene sound effect is accurately determined.
  • The process requires no manual setting of the scene sound effect, so while a high accuracy of the scene sound effect is ensured, the operation is simplified and the use efficiency of the electronic device is improved.
  • In this embodiment of the present invention, it is not necessary to provide a different scene sound effect for every application, which reduces the complexity of scene sound effects. Specifically, the mapping relationship between the audio track of the electronic device and the applications in the electronic device includes:
  • the audio track of the electronic device has a mapping relationship with the client of the application in the electronic device.
  • Further, the mapping relationship between the audio track of the electronic device and the client of the application in the electronic device includes:
  • the audio track of the electronic device has a mapping relationship with an application type to which the client of the application in the electronic device belongs.
  • In essence, this embodiment classifies the application software; different types of application software require different sound effects, for example game software versus music software, or real-time strategy games versus casual games.
  • Since the accuracy of the mapping relationship between the audio track and the application has a relatively important influence on the selection of the final scene sound effect, the embodiment of the present invention further provides the following solution for determining that application.
  • Before the determining, according to the mapping relationship, of the application that has a mapping relationship with the audio track of the electronic device, the above method further includes:
  • the electronic device establishes a communication connection with a server located on the network side, and sends a query request to the server on the network side through the communication connection, where the query request carries the name of the client or the name of the application; the server on the network side stores classification information of clients, or classification information that classifies clients by application name;
  • the determining, according to the mapping relationship, of the application that has a mapping relationship with the audio track of the electronic device then includes:
  • the electronic device receives the application type returned by the server, where the application type is determined by the server on the network side according to the classification information of the client or the classification information that classifies clients by application name;
  • the electronic device acquires the scene sound effect corresponding to the application type.
  • In this embodiment, application types are mapped to scene sound effects, and the mapping relationship is kept on the server side, which makes it easy to maintain.
  • The server may be a cloud server, and the mapping relationship may be maintained and updated by the operator, or may be personalized by the user.
  • The embodiments of the present invention do not uniquely limit this.
  • The above query request can be implemented using a socket.
  • Two programs on a network exchange data through a bidirectional communication connection; each end of this bidirectional connection is called a socket.
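  • A minimal plain-Java sketch of such a socket query is shown below; the host, port, and one-line text protocol (application name out, application type back) are assumptions made for illustration, since the document does not specify a wire format.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.io.OutputStreamWriter;
        import java.io.PrintWriter;
        import java.net.Socket;
        import java.nio.charset.StandardCharsets;

        final class ClassificationClient {
            /** Send the client/application name and read back the application type decided by the server. */
            static String queryApplicationType(String host, int port, String appName) throws Exception {
                try (Socket socket = new Socket(host, port);
                     PrintWriter out = new PrintWriter(
                             new OutputStreamWriter(socket.getOutputStream(), StandardCharsets.UTF_8), true);
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8))) {
                    out.println(appName);   // query request carrying the client/application name
                    return in.readLine();   // e.g. "MUSIC", "VIDEO" or "GAME"
                }
            }
        }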
  • Further, to meet users' personalization needs, the embodiment of the present invention provides the following solution: the above method further includes:
  • the electronic device updates, through the communication connection, the classification information of clients stored in the server on the network side, or the classification information that classifies clients by application name.
  • Optionally, the type of the application includes at least one of a music player, a video player, and a game application, and there is a correspondence between application types and scene sound effects;
  • the acquiring, by the electronic device, of the scene sound effect corresponding to the application then includes:
  • the electronic device determines, according to the correspondence between application types and scene sound effects, the scene sound effect corresponding to the application type to which the application belongs. It can be understood that there can be many application types and the above examples are not exhaustive; the types can also be further subdivided, for example game applications into real-time strategy games and casual games.
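  • Such a correspondence can be held in a simple lookup table; the enum and effect names in the Java sketch below are hypothetical.

        import java.util.EnumMap;
        import java.util.Map;

        enum AppType { MUSIC_PLAYER, VIDEO_PLAYER, GAME }

        final class SceneEffectTable {
            private static final Map<AppType, String> EFFECTS = new EnumMap<>(AppType.class);
            static {
                EFFECTS.put(AppType.MUSIC_PLAYER, "music_scene_effect");
                EFFECTS.put(AppType.VIDEO_PLAYER, "video_scene_effect");
                EFFECTS.put(AppType.GAME, "game_scene_effect");
            }

            /** Correspondence between the application type and its scene sound effect. */
            static String effectFor(AppType type) {
                return EFFECTS.get(type);
            }
        }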
  • the embodiment of the present invention further provides another more specific method flow description, including:
  • Popular media applications (music, video players, games, and so on) are collected, and an Extensible Markup Language (XML) file is built that classifies them into music, video, and games; this XML file is placed on a designated server, where related maintenance work such as modification, update, download, and upload can be performed on it;
  • the application layer on the mobile phone side listens, through a service, to the creation and release of an audio track (audiotrack), and obtains the audio output status of the audiotrack;
  • each audiotrack is bound to a client (a client is a program that, relative to a server, provides local services for the user, and is a kind of application software); by comparing this client with the applications in the maintained XML file, the electronic device knows which application is currently playing audio, thereby identifying the scene; the corresponding scene sound effect is then set according to the identified scene.
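  • To make the comparison against the XML file concrete, here is a hedged Java sketch that parses such a classification file with the standard javax.xml DOM API; the element and attribute names (<app package="..." type="..."/>) are invented for illustration and are not defined by the document.

        import java.io.ByteArrayInputStream;
        import java.nio.charset.StandardCharsets;
        import java.util.HashMap;
        import java.util.Map;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;
        import org.w3c.dom.NodeList;

        final class XmlClassification {
            /** Parse <app package="..." type="music|video|game"/> entries into a package -> type map. */
            static Map<String, String> load(String xml) throws Exception {
                Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                        .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
                Map<String, String> packageToType = new HashMap<>();
                NodeList apps = doc.getElementsByTagName("app");
                for (int i = 0; i < apps.getLength(); i++) {
                    Element app = (Element) apps.item(i);
                    packageToType.put(app.getAttribute("package"), app.getAttribute("type"));
                }
                return packageToType;
            }

            public static void main(String[] args) throws Exception {
                String sample = "<apps>"
                        + "<app package=\"com.example.music\" type=\"music\"/>"
                        + "<app package=\"com.example.game\" type=\"game\"/>"
                        + "</apps>";
                System.out.println(XmlClassification.load(sample).get("com.example.game"));  // prints: game
            }
        }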
  • An embodiment of the present invention further provides an electronic device, as shown in FIG. 3, including:
  • the monitoring control unit 301 is configured to start a service with a monitoring function after the electronic device is turned on;
  • the monitoring unit 302 is configured to monitor the audio track of the electronic device by using the service with the monitoring function to determine whether the audio track of the electronic device has audio output; between the audio track of the electronic device and the application in the electronic device Have a mapping relationship;
  • the application determining unit 303 is configured to: if the listening unit 302 determines that the audio track of the electronic device has audio output, determine an application that has a mapping relationship with the audio track of the electronic device according to the mapping relationship;
  • the sound effect setting unit 304 is configured to acquire a scene sound effect corresponding to the application, and set the current sound effect of the electronic device to the scene sound effect.
  • the service of the listening function may be implemented by a hardware entity, or may be based on a software function of the hardware entity, which is not limited by the embodiment of the present invention.
  • the parallel "tracks" seen in the sequencer software are the tracks, and one track corresponds to one part of the music. Therefore, each track defines the properties of the track, such as the sound of the track, the library of the sound, the number of channels, the input/output port, the volume, and so on.
  • the mapping relationship between audio tracks and applications can be a one-to-one correspondence.
  • the application here should be understood as a broad application, such as: application software, should The client used, the name of the application, or the type of application. This depends on what content the final scene sound needs to correspond to. For example, the scene sound corresponds to a type of application, so it should be understood as the application type. If the scene sound accurately corresponds to an application software, it should be understood as the application software. .
  • the embodiment of the present invention is not limited.
  • the service implementation of the software if the service implementation of the software is used, the service with the monitoring function can be started at the application layer to monitor the audio track.
  • the audio output device of the electronic device can be used, and the audio output device can include: a sound card and an audio output interface; or, a sound card and a speaker.
  • the audio output connector can be connected to an external speaker or a device such as a headset.
  • In this embodiment of the present invention, by monitoring the audio track and using the mapping relationship between the audio track and the application together with the correspondence between the scene sound effect and the application, the current application scenario of the electronic device is accurately identified and the required scene sound effect is accurately determined.
  • The process requires no manual setting of the scene sound effect, so while a high accuracy of the scene sound effect is ensured, the operation is simplified and the use efficiency of the electronic device is improved.
  • the mapping relationship between the audio track of the electronic device and the application in the electronic device includes:
  • the audio track of the electronic device has a mapping relationship with the client of the application in the electronic device.
  • Further, the mapping relationship between the audio track of the electronic device and the client of the application in the electronic device includes:
  • the audio track of the electronic device has a mapping relationship with an application type to which the client of the application in the electronic device belongs.
  • the application software is substantially classified, and different types of application software require different sound effects, such as game software and music software, real-time strategy games and casual games, and the like.
  • the embodiment of the present invention further provides the following solution: as shown in FIG. 4, the electronic device further includes:
  • a connection establishing unit 401, configured to establish a communication connection with the server located on the network side before the application determining unit 303 determines, according to the mapping relationship, the application that has a mapping relationship with the audio track of the electronic device;
  • a request sending unit 402, configured to send a query request to the server on the network side through the communication connection, where the query request carries the client name or the name of the application; the server on the network side stores classification information of clients, or classification information that classifies clients by application name;
  • the application determining unit 303 is specifically configured to receive the application type returned by the server, where the application type is determined by the server on the network side according to the classification information of the client or the classification information that classifies clients by application name;
  • the sound effect setting unit 304 is configured to acquire a scene sound effect corresponding to the application type, and set the current sound effect of the electronic device to the scene sound effect.
  • In this embodiment, application types are mapped to scene sound effects, and the mapping relationship is kept on the server side, which makes it easy to maintain.
  • The server may be a cloud server, and the mapping relationship may be maintained and updated by the operator, or may be personalized by the user.
  • The embodiments of the present invention do not uniquely limit this.
  • The above query request can be implemented using a socket.
  • Two programs on a network exchange data through a bidirectional communication connection; each end of this bidirectional connection is called a socket.
  • Further, based on users' personalization needs, as shown in FIG. 5, the electronic device further includes:
  • an information updating unit 501, configured to update, through the communication connection, the classification information of clients stored in the server on the network side, or the classification information that classifies clients by application name.
  • the type of the application includes: at least one of a music player, a video player, and a game application; and a correspondence between a type of the application and a scene sound effect;
  • the sound effect setting unit 304 is configured to determine a scene sound effect corresponding to the application type to which the application belongs according to the correspondence between the type of the application and the scene sound effect, and set the current sound effect of the electronic device to the scene sound effect.
  • the embodiment of the present invention further provides another electronic device, as shown in FIG. 6, comprising: a processor 601, a memory 602 and an audio output device 603 for outputting scene sound effects; wherein the storage device 602 can be used to provide a buffer required by the processor 601 to perform data processing, and can further provide a storage space of audio data of the scene sound effect;
  • the audio data may be from the network side, and the memory 602 local to the electronic device may provide the cache space after downloading;
  • the processor 601 is configured to: start a service with a monitoring function after the electronic device is turned on; monitor the audio track of the electronic device through the service with the monitoring function to determine whether the audio track of the electronic device has audio output, where there is a mapping relationship between the audio track of the electronic device and the applications in the electronic device;
  • if it is determined that the audio track of the electronic device has audio output, determine, according to the mapping relationship, the application that has a mapping relationship with the audio track of the electronic device;
  • acquire the scene sound effect corresponding to the application, and set the current sound effect of the electronic device to the scene sound effect.
  • the service of the listening function may be implemented by a hardware entity, or may be based on a software function of the hardware entity, which is not limited by the embodiment of the present invention.
  • the parallel "tracks" seen in the sequencer software are the tracks, and one track corresponds to one part of the music. Therefore, each track defines the properties of the track, such as the sound of the track, the library of the sound, the number of channels, the input/output port, the volume, and so on.
  • the mapping relationship between the audio track and the application may be a one-to-one correspondence.
  • the application here should be understood as an application in a broad sense, such as: application software, client of the application, name of the application, or type of application. This depends on what content the final scene sound needs to correspond to. For example, the scene sound corresponds to a type of application, so it should be understood as the application type. If the scene sound accurately corresponds to an application software, it should be understood as the application software. .
  • the embodiment of the present invention is not limited.
  • the service implementation of the software if the service implementation of the software is used, the service with the monitoring function can be started at the application layer to monitor the audio track.
  • the audio output device of the electronic device can be used, and the audio output device can include: a sound card and an audio output interface; or, a sound card and a speaker.
  • the audio output connector can be connected to an external speaker or a device such as a headset.
  • In this embodiment of the present invention, by monitoring the audio track and using the mapping relationship between the audio track and the application together with the correspondence between the scene sound effect and the application, the current application scenario of the electronic device is accurately identified and the required scene sound effect is accurately determined.
  • The process requires no manual setting of the scene sound effect, so while a high accuracy of the scene sound effect is ensured, the operation is simplified and the use efficiency of the electronic device is improved.
  • the mapping relationship between the audio track of the electronic device and the application in the electronic device includes:
  • the audio track of the electronic device has a mapping relationship with the client of the application in the electronic device.
  • Further, the mapping relationship between the audio track of the electronic device and the client of the application in the electronic device includes:
  • the audio track of the electronic device has a mapping relationship with an application type to which the client of the application in the electronic device belongs.
  • the application software is substantially classified, and different types of application software require different sound effects, such as game software and music software, real-time strategy games and casual games, and the like.
  • Since the accuracy of the mapping relationship between the audio track and the application has a relatively important influence on the selection of the final scene sound effect, the embodiment of the present invention further provides the following solution: the processor 601 is further configured to, before determining, according to the mapping relationship, the application that has a mapping relationship with the audio track of the electronic device, establish a communication connection with the server located on the network side, and send a query request to the server on the network side through the communication connection, where the query request carries the client name or the name of the application; the server on the network side stores classification information of clients, or classification information that classifies clients by application name;
  • the processor 601, configured to determine, according to the mapping relationship, the application that has a mapping relationship with the audio track of the electronic device, is specifically configured to: receive the application type returned by the server, where the application type is determined by the server on the network side according to the classification information of the client or the classification information that classifies clients by application name;
  • the processor 601, configured to acquire the scene sound effect corresponding to the application, is specifically configured to: acquire the scene sound effect corresponding to the application type.
  • In this embodiment, application types are mapped to scene sound effects, and the mapping relationship is kept on the server side, which makes it easy to maintain.
  • The server may be a cloud server, and the mapping relationship may be maintained and updated by the operator, or may be personalized by the user.
  • The embodiments of the present invention do not uniquely limit this.
  • The above query request can be implemented using a socket.
  • Two programs on a network exchange data through a bidirectional communication connection; each end of this bidirectional connection is called a socket.
  • Further, based on users' personalization needs, the embodiment of the present invention provides the following solution: the processor 601 is further configured to update, through the communication connection, the classification information of clients stored in the server on the network side, or the classification information that classifies clients by application name.
  • the type of the application includes: at least one of a music player, a video player, and a game application; and a correspondence between a type of the application and a scene sound effect;
  • the processor 601 is configured to obtain a scene sound effect corresponding to the application, and includes:
  • the scene sound effect corresponding to the application type to which the application belongs is determined according to the correspondence between the type of the application and the scene sound effect.
  • An embodiment of the present invention further provides a terminal device.
  • As shown in FIG. 7, for convenience of description, only the parts related to the embodiment of the present invention are shown; for specific technical details that are not disclosed, refer to the method part of the embodiments of the present invention.
  • The terminal device may be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, an in-vehicle computer, and so on; the following description takes a mobile phone as an example.
  • FIG. 7 is a block diagram showing a partial structure of a mobile phone related to a terminal device provided by an embodiment of the present invention.
  • The mobile phone includes components such as: a radio frequency (RF) circuit 710, a memory 720, an input unit 730, a display unit 740, a sensor 750, an audio circuit 760, a wireless fidelity (WiFi) module 770, a processor 780, and a power supply 790. Those skilled in the art can understand that the mobile phone structure shown in FIG. 7 does not constitute a limitation on the mobile phone, and it may include more or fewer components than shown, combine some components, or use a different arrangement of components.
  • The RF circuit 710 can be used for receiving and transmitting signals while sending and receiving information or during a call; in particular, downlink information from a base station is received and handed to the processor 780 for processing, and uplink data is sent to the base station.
  • RF circuit 710 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like.
  • RF Circuitry 710 can also communicate with the network and other devices via wireless communication. The above wireless communication may use any communication standard or protocol, including but not limited to Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (Code Division). Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), E-mail, Short Messaging Service (SMS), and the like.
  • the memory 720 can be used to store software programs and modules, and the processor 780 executes various functional applications and data processing of the mobile phone by running software programs and modules stored in the memory 720.
  • the memory 720 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may be stored according to Data created by the use of the mobile phone (such as audio data, phone book, etc.).
  • memory 720 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
  • the input unit 730 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function controls of the handset.
  • the input unit 730 may include a touch panel 731 and other input devices 732.
  • The touch panel 731, also referred to as a touch screen, can collect touch operations by the user on or near it (such as operations performed by the user on or near the touch panel 731 with a finger, a stylus, or any other suitable object or accessory), and drive the corresponding connecting device according to a preset program.
  • the touch panel 731 can include two parts: a touch detection device and a touch controller.
  • The touch detection device detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 780; it can also receive commands sent by the processor 780 and execute them.
  • the touch panel 731 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • the input unit 730 may also include other input devices 732.
  • other input devices 732 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like.
  • The display unit 740 can be used to display information input by the user or information provided to the user, as well as the various menus of the mobile phone.
  • the display unit 740 can include a display panel 741.
  • the display panel 741 can be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
  • Further, the touch panel 731 can cover the display panel 741; when the touch panel 731 detects a touch operation on or near it, it transmits the operation to the processor 780 to determine the type of the touch event, and the processor 780 then provides a corresponding visual output on the display panel 741 according to the type of the touch event.
  • Although in FIG. 7 the touch panel 731 and the display panel 741 are shown as two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 731 and the display panel 741 may be integrated to implement the input and output functions of the mobile phone.
  • the handset may also include at least one type of sensor 750, such as a light sensor, motion sensor, and other sensors.
  • The light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 741 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 741 and/or the backlight when the mobile phone is moved to the ear.
  • the accelerometer sensor can detect the magnitude of acceleration in all directions (usually three axes). When it is stationary, it can detect the magnitude and direction of gravity.
  • As one kind of motion sensor, the accelerometer can be used in applications that recognize the attitude of the mobile phone (such as switching between landscape and portrait, related games, and magnetometer attitude calibration) and in vibration-recognition functions (such as a pedometer or tap detection); other sensors such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor may also be configured on the mobile phone, and are not described here again.
  • An audio circuit 760, a speaker 761, and a microphone 762 can provide an audio interface between the user and the handset.
  • The audio circuit 760 can convert received audio data into an electrical signal and transmit it to the speaker 761, which converts it into a sound signal for output; on the other hand, the microphone 762 converts a collected sound signal into an electrical signal, which the audio circuit 760 receives and converts into audio data; the audio data is then output to the processor 780 for processing and sent, for example, to another mobile phone via the RF circuit 710, or output to the memory 720 for further processing.
  • WiFi is a short-range wireless transmission technology
  • the mobile phone can help users to send and receive emails, browse web pages, and access streaming media through the WiFi module 770, which provides users with wireless broadband Internet access.
  • Although FIG. 7 shows the WiFi module 770, it can be understood that it is not an essential part of the mobile phone and may be omitted as needed without changing the essence of the invention.
  • The processor 780 is the control center of the mobile phone; it connects the various parts of the entire mobile phone using various interfaces and lines, and performs the various functions of the mobile phone and processes data by running or executing the software programs and/or modules stored in the memory 720 and calling the data stored in the memory 720, thereby monitoring the mobile phone as a whole.
  • the processor 780 may include one or more processing units; preferably, the processor 780 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, an application, and the like.
  • the modem processor primarily handles wireless communications. It will be appreciated that the above described modem processor may also not be integrated into the processor 780.
  • the handset also includes a power source 790 (such as a battery) that supplies power to the various components.
  • the power source can be logically coupled to the processor 780 through a power management system to manage functions such as charging, discharging, and power management through the power management system.
  • the mobile phone may further include a camera, a Bluetooth module, and the like, and details are not described herein again.
  • the processor 780 included in the terminal device further has a function corresponding to the processor 601 of the foregoing embodiment.
  • It should be noted that, in the foregoing electronic device embodiments, the included units are merely divided according to functional logic; the division is not limited to the above as long as the corresponding functions can be implemented.
  • In addition, the specific names of the functional units are merely for ease of distinguishing them from one another and are not intended to limit the protection scope of the present invention.
  • Furthermore, a person of ordinary skill in the art can understand that all or some of the steps in the foregoing method embodiments can be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium; the storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Stored Programmes (AREA)
  • Information Transfer Between Computers (AREA)
  • Telephone Function (AREA)

Abstract

An embodiment of the present invention discloses a method for controlling a scene sound effect, and an electronic device. The method includes: after the electronic device is turned on, starting a service with a monitoring function; the electronic device monitors an audio track of the electronic device through the service with the monitoring function and determines whether the audio track of the electronic device has audio output, where there is a mapping relationship between the audio track of the electronic device and the applications in the electronic device; if the electronic device determines that the audio track of the electronic device has audio output, it determines, according to the mapping relationship, the application that has a mapping relationship with the audio track of the electronic device; the electronic device acquires the scene sound effect corresponding to the application and sets the current sound effect of the electronic device to the scene sound effect. The process requires no manual setting of the scene sound effect, so while a high accuracy of the scene sound effect is ensured, the operation is simplified and the use efficiency of the electronic device is improved.

Description

一种场景音效的控制方法、及电子设备 技术领域
本发明涉及计算机技术领域,特别涉及一种场景音效的控制方法、及电子设备。
背景技术
音效就是指由声音所制造的效果,是指为增进某一场景的真实感、气氛或戏剧讯息,而增加的杂音或声音。增加的杂音或声音可以包括乐音和效果音。例如:数字音效、环境音效、MP3音效(普通音效、专业音效)等。
因此,音效有时也称为声效(Sound effects或Audio effects)是人工制造或加强的声音,用来增强对电影、电子游戏、音乐或其他媒体的艺术或其他内容的声音处理。场景音效是音效的一个更为具体的应用场景,其涉及的是与当期应用场景相关的音效。
发明内容
本发明实施例提供了一种场景音效的控制方法,包括:
电子设备在被开启后,启动具有监听功能的服务;
所述电子设备通过所述具有监听功能的服务对所述电子设备的音轨进行监听,确定所述电子设备的音轨是否有音频输出;所述电子设备的音轨与所述电子设备内的应用之间具有映射关系;
若所述电子设备确定所述电子设备的音轨有音频输出,则依据所述映射关系确定与所述电子设备的音轨有映射关系的应用;
所述电子设备获取所述应用对应的场景音效,并将所述电子设备当前音效设置为所述场景音效。
二方面本发明实施例还提供了一种电子设备,包括:
监听控制单元,用于在所述电子设备被开启后,启动具有监听功能的服务;
监听单元,用于通过所述具有监听功能的服务对所述电子设备的音轨进行监听,确定所述电子设备的音轨是否有音频输出;所述电子设备的音轨与所述 电子设备内的应用之间具有映射关系;
应用确定单元,用于若所述监听单元确定所述电子设备的音轨有音频输出,则依据所述映射关系确定与所述电子设备的音轨有映射关系的应用;
音效设置单元,用于获取所述应用对应的场景音效,并将所述电子设备当前音效设置为所述场景音效。
三方面本发明实施例还提供了一种电子设备,包括:处理器、存储器以及用于输出场景音效的音频输出设备;所述处理器用于执行本发明实施例提供的任意一项所述的方法。
四方面本发明实施例还提供了一种计算机可读存储介质,其存储用于电子数据交换的计算机程序,其中,所述计算机程序被执行的情况下实现本发明实施例提供的任一项所述的方法。
五方面本发明实施例还提供了一种程序产品,所述计算机程序被执行的情况下实现本发明实施例提供的方法。
从以上技术方案可以看出,本发明实施例具有以下优点:通过对音轨的监听,音轨与应用的映射关系以及场景音效与应用的对应关系,准确确定了电子设备当前所处的应用场景,并准确确定需要的场景音效。该过程不需要人参与场景音效的设置,因此在保证较高场景音效的准确率的前提下,简化操作,提高电子设备的使用效率。
附图说明
下面将对实施例描述中所需要使用的附图作简要介绍。
图1为本发明实施例方法流程示意图;
图2为本发明实施例方法流程示意图;
图3为本发明实施例电子设备结构示意图;
图4为本发明实施例电子设备结构示意图;
图5为本发明实施例电子设备结构示意图;
图6为本发明实施例电子设备结构示意图;
图7为本发明实施例终端设备结构示意图。
具体实施方式
为了使本发明的目的、技术方案和优点更加清楚,下面将结合附图对本发明作进一步地详细描述,显然,所描述的实施例仅仅是本发明一部份实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其它实施例,都属于本发明保护的范围。
如果音效对不同场景的声音处理技术不同的,那么达到的效果也将是不同的,因此可以提高应用场景的识别率并设置对应的音效模式,能大大提高用户听觉体验。
目前场景音效,可以包含:音乐播放应用场景的音效、视频播放场景的音效、等等;场景音效的控制是由人手工设置的,具体流程如下:
电子设备通过交互界面获取人手工输入/或选择的场景音效,然后电子设备将当前的场景音效设置为上述人手工输入/或选择的场景音效。
以上设置场景音效的准确率较高,但是操作麻烦,使电子设备的使用效率较低。
本发明实施例提供了一种场景音效的控制方法,如图1所示,包括:
101:电子设备在被开启后,启动具有监听功能的服务;
在本实施例中,监听功能的服务可以是由硬件实体实现的,也可以基于硬件实体的软件功能,对此本发明实施例不作唯一性限定。
102:上述电子设备通过上述具有监听功能的服务对上述电子设备的音轨进行监听,确定上述电子设备的音轨是否有音频输出;上述电子设备的音轨与上述电子设备内的应用之间具有映射关系;
在音序器软件中看到的一条一条的平行“轨道”是音轨,一条音轨对应于音乐的一个声部。因此,每条音轨定义了该条音轨的属性,如音轨的音色,音色库,通道数,输入/输出端口,音量等。音轨与应用之间的映射关系,可以是一一对应的关系,这里的应用应当理解为广义上的应用,例如:应用软件、应用的客户端、应用的名称、或者应用的类型。这取决于最终场景音效需要对应到什么内容,例如:场景音效对应到一类应用,那么此处应当理解为应用类型;如果场景音效精确对应到某一个应用软件,那么此处应当理解为应用软件。
103:若上述电子设备确定上述电子设备的音轨有音频输出,则依据上述 映射关系确定与上述电子设备的音轨有映射关系的应用;
音轨是否有音频输出,具体如何监控确定,本发明实施例不作唯一性限定。以上记载的监控过程,如果使用软件的服务实现,可以在应用层启动具有监听功能的服务来对音轨进行监听。
104:上述电子设备获取上述应用对应的场景音效,并将上述电子设备当前音效设置为上述场景音效。
场景音效被设置以后,可以使用电子设备的音频输出设备,该音频输出设备可以包含:声卡以及音频输出接口;或者,包含声卡以及扬声器。这里音频输出接口则可以连接到外接的扬声器或者耳机之类的设备。
本发明实施例,通过对音轨的监听,音轨与应用的映射关系以及场景音效与应用的对应关系,准确确定了电子设备当前所处的应用场景,并准确确定需要的场景音效。该过程不需要人参与场景音效的设置,因此在保证较高场景音效的准确率的前提下,简化操作,提高电子设备的使用效率。
在本发明实施例中,可以不必对所有应用都有不同的场景音效,减少场景音效的复杂度,具体如下:上述电子设备的音轨与上述电子设备内的应用之间具有映射关系包括:
上述电子设备的音轨与上述电子设备内的应用的客户端之间具有映射关系。
在本发明实施例中,可以不必对所有应用都有不同的场景音效,减少场景音效的复杂度,具体如下:上述电子设备的音轨与上述电子设备内的应用的客户端之间具有映射关系包括:
上述电子设备的音轨与上述电子设备内的应用的客户端所属的应用类型之间具有映射关系。
本实施例,在实质上是对应用软件进行分类,不同类型的应用软件需要不同的音效,例如:游戏软件与音乐软件,即时战略游戏与休闲游戏,等等。
由于音轨与应用之间的映射关系的准确性对最终场景音效的选用具有较为重要的影响,本发明实施例还提供了如下解决方案:在上述依据上述映射关系确定与上述电子设备的音轨有映射关系的应用之前,上述方法还包括:
上述电子设备与位于网络侧的服务器建立通信连接,并通过上述通信连接 向上述网络侧的服务器发送查询请求,在上述查询请求中携带上述客户端名称或者携带上述应用的名称;在上述网络侧的服务器中保存有客户端的分类信息或者依应用的名称对客户端分类的分类信息;
上述依据上述映射关系确定与上述电子设备的音轨有映射关系的应用包括:
上述电子设备接收来自上述服务器返回的应用类型,上述应用类型由上述网络侧的服务器依据上述客户端的分类信息或者依应用的名称对客户端分类的分类信息确定;
上述电子设备获取上述应用对应的场景音效包括:
上述电子设备获取上述应用类型对应的场景音效。
在本实施例中,应用类型映射场景音效,映射关系存在服务器一侧,方便维护;服务器可以是云端的服务器,该映射关系可以由运营商来负责维护更新,也可以由用户个性化的自己设定。本发明实施例对此不作唯一性限定。以上查询请求可以通过套接字(socket)实现,在网络上的两个程序通过一个双向的通信连接实现数据的交换,这个双向的通信连接的一端称为一个socket。
进一步地,基于用户个性化的需求,本发明实施例提供了如下解决方案:上述方法还包括:
上述电子设备通过上述通信连接更新在上述网络侧的服务器中保存的客户端的分类信息或者依应用的名称对客户端分类的分类信息。
可选地,上述应用的类型包括:音乐播放器、视频播放器、游戏应用中的至少一项;应用的类型与场景音效之间具有对应关系;
上述电子设备获取上述应用对应的场景音效,包括:
上述电子设备依据上述应用的类型与场景音效之间具有的对应关系确定上述应用所属的应用类型对应的场景音效。
可以理解的是,应用的类型可以有很多,以上举例并不是应用的类型的穷举;另外,以上应用的类型也可以进一步细分,例如:游戏应用还可以进一步细分为:即时战略游戏应用和休闲游戏应用等。
如图2所示,本发明实施例还提供了另一个更为具体的方法流程举例说明,包括:
201:收集当前流行的媒体应用,如音乐,视频播放器,游戏等。
202:建可扩展标记语言(Extensible Markup Language,XML)文件分别对音乐,视频,游戏进行归类;
203:将此XML文件放到指定的服务器上;该XML文件可以进行诸如:修改、更新、下载、上传等等相关维护工作;
204:在手机一侧的应用层通过一个服务(service)监听音轨(audiotrack)的创建和释放,并获得audiotrack的音频输出情况;
205:每个audiotrack都会绑定一个客户端(客户端(Client)或称为用户端,是指与服务器相对应,为客户提供本地服务的程序,属于应用软件的一种),将此客户端和我们维护的XML文件中的应用进行对比分析,就知道当前是哪个应用在播放音频了,从而达到识别场景的目的;
206:通过识别的场景设置对应的场景音效。
本发明实施例还提供了一种电子设备,如图3所示,包括:
监听控制单元301,用于在上述电子设备被开启后,启动具有监听功能的服务;
监听单元302,用于通过上述具有监听功能的服务对上述电子设备的音轨进行监听,确定上述电子设备的音轨是否有音频输出;上述电子设备的音轨与上述电子设备内的应用之间具有映射关系;
应用确定单元303,用于若上述监听单元302确定上述电子设备的音轨有音频输出,则依据上述映射关系确定与上述电子设备的音轨有映射关系的应用;
音效设置单元304,用于获取上述应用对应的场景音效,并将上述电子设备当前音效设置为上述场景音效。
在本实施例中,监听功能的服务可以是由硬件实体实现的,也可以基于硬件实体的软件功能,对此本发明实施例不作唯一性限定。
在音序器软件中看到的一条一条的平行“轨道”是音轨,一条音轨对应于音乐的一个声部。因此,每条音轨定义了该条音轨的属性,如音轨的音色,音色库,通道数,输入/输出端口,音量等。音轨与应用之间的映射关系,可以是一一对应的关系,这里的应用应当理解为广义上的应用,例如:应用软件、应 用的客户端、应用的名称、或者应用的类型。这取决于最终场景音效需要对应到什么内容,例如:场景音效对应到一类应用,那么此处应当理解为应用类型;如果场景音效精确对应到某一个应用软件,那么此处应当理解为应用软件。
音轨是否有音频输出,具体如何监控确定,本发明实施例不作唯一性限定。以上记载的监控过程,如果使用软件的服务实现,可以在应用层启动具有监听功能的服务来对音轨进行监听。
场景音效被设置以后,可以使用电子设备的音频输出设备,该音频输出设备可以包含:声卡以及音频输出接口;或者,包含声卡以及扬声器。这里音频输出接口则可以连接到外接的扬声器或者耳机之类的设备。
本发明实施例,通过对音轨的监听,音轨与应用的映射关系以及场景音效与应用的对应关系,准确确定了电子设备当前所处的应用场景,并准确确定需要的场景音效。该过程不需要人参与场景音效的设置,因此在保证较高场景音效的准确率的前提下,简化操作,提高电子设备的使用效率。
在本发明实施例中,可以不必对所有应用都有不同的场景音效,减少场景音效的复杂度,具体如下:上述电子设备的音轨与上述电子设备内的应用之间具有映射关系包括:
上述电子设备的音轨与上述电子设备内的应用的客户端之间具有映射关系。
在本发明实施例中,可以不必对所有应用都有不同的场景音效,减少场景音效的复杂度,具体如下:上述电子设备的音轨与上述电子设备内的应用的客户端之间具有映射关系包括:
上述电子设备的音轨与上述电子设备内的应用的客户端所属的应用类型之间具有映射关系。
本实施例,在实质上是对应用软件进行分类,不同类型的应用软件需要不同的音效,例如:游戏软件与音乐软件,即时战略游戏与休闲游戏,等等。
由于音轨与应用之间的映射关系的准确性对最终场景音效的选用具有较为重要的影响,本发明实施例还提供了如下解决方案:如图4所示,上述电子设备还包括:
连接建立单元401,用于在上述应用确定单元303依据上述映射关系确定 与上述电子设备的音轨有映射关系的应用之前,与位于网络侧的服务器建立通信连接;
请求发送单元402,用于通过上述通信连接向上述网络侧的服务器发送查询请求,在上述查询请求中携带上述客户端名称或者携带上述应用的名称;在上述网络侧的服务器中保存有客户端的分类信息或者依应用的名称对客户端分类的分类信息;
上述应用确定单元303,具体用于接收来自上述服务器返回的应用类型,上述应用类型由上述网络侧的服务器依据上述客户端的分类信息或者依应用的名称对客户端分类的分类信息确定;
上述音效设置单元304,具体用于获取上述应用类型对应的场景音效,并将上述电子设备当前音效设置为上述场景音效。
在本实施例中,应用类型映射场景音效,映射关系存在服务器一侧,方便维护;服务器可以是云端的服务器,该映射关系可以由运营商来负责维护更新,也可以由用户个性化的自己设定。本发明实施例对此不作唯一性限定。以上查询请求可以通过套接字(socket)实现,在网络上的两个程序通过一个双向的通信连接实现数据的交换,这个双向的通信连接的一端称为一个socket。
进一步地,基于用户个性化的需求,本发明实施例提供了如下解决方案:如图5所示,上述电子设备还包括:
信息更新单元501,用于通过上述通信连接更新在上述网络侧的服务器中保存的客户端的分类信息或者依应用的名称对客户端分类的分类信息。
可选地,上述应用的类型包括:音乐播放器、视频播放器、游戏应用中的至少一项;应用的类型与场景音效之间具有对应关系;
上述音效设置单元304,具体用于依据上述应用的类型与场景音效之间具有的对应关系确定上述应用所属的应用类型对应的场景音效,并将上述电子设备当前音效设置为上述场景音效。
可以理解的是,应用的类型可以有很多,以上举例并不是应用的类型的穷举;另外,以上应用的类型也可以进一步细分,例如:游戏应用还可以进一步细分为:即时战略游戏应用和休闲游戏应用等。
本发明实施例还提供了另一种电子设备,如图6所示,包括:处理器601、 存储器602以及用于输出场景音效的音频输出设备603;其中存储设备602可以用于提供处理器601执行数据处理所需要的缓存,也可以进一步提供场景音效的音频数据的存储空间;该场景音效的音频数据可以是来自于网络侧,在电子设备本地的存储器602可以提供下载后的缓存空间;
其中,上述处理器601,用于在电子设备在被开启后,启动具有监听功能的服务;通过上述具有监听功能的服务对上述电子设备的音轨进行监听,确定上述电子设备的音轨是否有音频输出;上述电子设备的音轨与上述电子设备内的应用之间具有映射关系;若确定上述电子设备的音轨有音频输出,则依据上述映射关系确定与上述电子设备的音轨有映射关系的应用;获取上述应用对应的场景音效,并将上述电子设备当前音效设置为上述场景音效。
在本实施例中,监听功能的服务可以是由硬件实体实现的,也可以基于硬件实体的软件功能,对此本发明实施例不作唯一性限定。
在音序器软件中看到的一条一条的平行“轨道”是音轨,一条音轨对应于音乐的一个声部。因此,每条音轨定义了该条音轨的属性,如音轨的音色,音色库,通道数,输入/输出端口,音量等。音轨与应用之间的映射关系,可以是一一对应的关系,这里的应用应当理解为广义上的应用,例如:应用软件、应用的客户端、应用的名称、或者应用的类型。这取决于最终场景音效需要对应到什么内容,例如:场景音效对应到一类应用,那么此处应当理解为应用类型;如果场景音效精确对应到某一个应用软件,那么此处应当理解为应用软件。
音轨是否有音频输出,具体如何监控确定,本发明实施例不作唯一性限定。以上记载的监控过程,如果使用软件的服务实现,可以在应用层启动具有监听功能的服务来对音轨进行监听。
场景音效被设置以后,可以使用电子设备的音频输出设备,该音频输出设备可以包含:声卡以及音频输出接口;或者,包含声卡以及扬声器。这里音频输出接口则可以连接到外接的扬声器或者耳机之类的设备。
本发明实施例,通过对音轨的监听,音轨与应用的映射关系以及场景音效与应用的对应关系,准确确定了电子设备当前所处的应用场景,并准确确定需要的场景音效。该过程不需要人参与场景音效的设置,因此在保证较高场景音效的准确率的前提下,简化操作,提高电子设备的使用效率。
在本发明实施例中,可以不必对所有应用都有不同的场景音效,减少场景音效的复杂度,具体如下:上述电子设备的音轨与上述电子设备内的应用之间具有映射关系包括:
上述电子设备的音轨与上述电子设备内的应用的客户端之间具有映射关系。
在本发明实施例中,可以不必对所有应用都有不同的场景音效,减少场景音效的复杂度,具体如下:上述电子设备的音轨与上述电子设备内的应用的客户端之间具有映射关系包括:
上述电子设备的音轨与上述电子设备内的应用的客户端所属的应用类型之间具有映射关系。
本实施例,在实质上是对应用软件进行分类,不同类型的应用软件需要不同的音效,例如:游戏软件与音乐软件,即时战略游戏与休闲游戏,等等。
由于音轨与应用之间的映射关系的准确性对最终场景音效的选用具有较为重要的影响,本发明实施例还提供了如下解决方案:上述处理器601,还用于在上述依据上述映射关系确定与上述电子设备的音轨有映射关系的应用之前,与位于网络侧的服务器建立通信连接,并通过上述通信连接向上述网络侧的服务器发送查询请求,在上述查询请求中携带上述客户端名称或者携带上述应用的名称;在上述网络侧的服务器中保存有客户端的分类信息或者依应用的名称对客户端分类的分类信息;
上述处理器601,用于依据上述映射关系确定与上述电子设备的音轨有映射关系的应用包括:
接收来自上述服务器返回的应用类型,上述应用类型由上述网络侧的服务器依据上述客户端的分类信息或者依应用的名称对客户端分类的分类信息确定;
上述处理器601,用于获取上述应用对应的场景音效包括:
获取上述应用类型对应的场景音效。
在本实施例中,应用类型映射场景音效,映射关系存在服务器一侧,方便维护;服务器可以是云端的服务器,该映射关系可以由运营商来负责维护更新,也可以由用户个性化的自己设定。本发明实施例对此不作唯一性限定。以上查 询请求可以通过套接字(socket)实现,在网络上的两个程序通过一个双向的通信连接实现数据的交换,这个双向的通信连接的一端称为一个socket。
进一步地,基于用户个性化的需求,本发明实施例提供了如下解决方案:处理器601,还用于上述电子设备通过上述通信连接更新在上述网络侧的服务器中保存的客户端的分类信息或者依应用的名称对客户端分类的分类信息。
可选地,上述应用的类型包括:音乐播放器、视频播放器、游戏应用中的至少一项;应用的类型与场景音效之间具有对应关系;
上述处理器601,用于获取上述应用对应的场景音效,包括:
依据上述应用的类型与场景音效之间具有的对应关系确定上述应用所属的应用类型对应的场景音效。
可以理解的是,应用的类型可以有很多,以上举例并不是应用的类型的穷举;另外,以上应用的类型也可以进一步细分,例如:游戏应用还可以进一步细分为:即时战略游戏应用和休闲游戏应用等。
本发明实施例还提供了一种终端设备,如图7所示,为了便于说明,仅示出了与本发明实施例相关的部分,具体技术细节未揭示的,请参照本发明实施例方法部分。该终端设备可以为包括手机、平板电脑、PDA(Personal Digital Assistant,个人数字助理)、POS(Point of Sales,销售终端)、车载电脑等任意终端设备,以终端设备为手机为例:
图7示出的是与本发明实施例提供的终端设备相关的手机的部分结构的框图。参考图7,手机包括:射频(Radio Frequency,RF)电路710、存储器720、输入单元730、显示单元740、传感器750、音频电路760、无线保真(wireless fidelity,WiFi)模块770、处理器780、以及电源790等部件。本领域技术人员可以理解,图7中示出的手机结构并不构成对手机的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
下面结合图7对手机的各个构成部件进行具体的介绍:
RF电路710可用于收发信息或通话过程中,信号的接收和发送,特别地,将基站的下行信息接收后,给处理器780处理;另外,将设计上行的数据发送给基站。通常,RF电路710包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器(Low Noise Amplifier,LNA)、双工器等。此外,RF 电路710还可以通过无线通信与网络和其他设备通信。上述无线通信可以使用任一通信标准或协议,包括但不限于全球移动通讯系统(Global System of Mobile communication,GSM)、通用分组无线服务(General Packet Radio Service,GPRS)、码分多址(Code Division Multiple Access,CDMA)、宽带码分多址(Wideband Code Division Multiple Access,WCDMA)、长期演进(Long Term Evolution,LTE)、电子邮件、短消息服务(Short Messaging Service,SMS)等。
存储器720可用于存储软件程序以及模块,处理器780通过运行存储在存储器720的软件程序以及模块,从而执行手机的各种功能应用以及数据处理。存储器720可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据手机的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器720可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。
输入单元730可用于接收输入的数字或字符信息,以及产生与手机的用户设置以及功能控制有关的键信号输入。具体地,输入单元730可包括触控面板731以及其他输入设备732。触控面板731,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板731上或在触控面板731附近的操作),并根据预先设定的程式驱动相应的连接装置。可选的,触控面板731可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器780,并能接收处理器780发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板731。除了触控面板731,输入单元730还可以包括其他输入设备732。具体地,其他输入设备732可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆等中的一种或多种。
显示单元740可用于显示由用户输入的信息或提供给用户的信息以及手 机的各种菜单。显示单元740可包括显示面板741,可选的,可以采用液晶显示器(Liquid Crystal Display,LCD)、有机发光二极管(Organic Light-Emitting Diode,OLED)等形式来配置显示面板741。进一步的,触控面板731可覆盖显示面板741,当触控面板731检测到在其上或附近的触摸操作后,传送给处理器780以确定触摸事件的类型,随后处理器780根据触摸事件的类型在显示面板741上提供相应的视觉输出。虽然在图7中,触控面板731与显示面板741是作为两个独立的部件来实现手机的输入和输入功能,但是在某些实施例中,可以将触控面板731与显示面板741集成而实现手机的输入和输出功能。
手机还可包括至少一种传感器750,比如光传感器、运动传感器以及其他传感器。具体地,光传感器可包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示面板741的亮度,接近传感器可在手机移动到耳边时,关闭显示面板741和/或背光。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别手机姿态的应用(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;至于手机还可配置的陀螺仪、气压计、湿度计、温度计、红外线传感器等其他传感器,在此不再赘述。
音频电路760、扬声器761,传声器762可提供用户与手机之间的音频接口。音频电路760可将接收到的音频数据转换后的电信号,传输到扬声器761,由扬声器761转换为声音信号输出;另一方面,传声器762将收集的声音信号转换为电信号,由音频电路760接收后转换为音频数据,再将音频数据输出处理器780处理后,经RF电路710以发送给比如另一手机,或者将音频数据输出至存储器720以便进一步处理。
WiFi属于短距离无线传输技术,手机通过WiFi模块770可以帮助用户收发电子邮件、浏览网页和访问流式媒体等,它为用户提供了无线的宽带互联网访问。虽然图7示出了WiFi模块770,但是可以理解的是,其并不属于手机的必须构成,完全可以根据需要在不改变发明的本质的范围内而省略。
处理器780是手机的控制中心,利用各种接口和线路连接整个手机的各个部分,通过运行或执行存储在存储器720内的软件程序和/或模块,以及调用 存储在存储器720内的数据,执行手机的各种功能和处理数据,从而对手机进行整体监控。可选的,处理器780可包括一个或多个处理单元;优选的,处理器780可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器780中。
手机还包括给各个部件供电的电源790(比如电池),优选的,电源可以通过电源管理系统与处理器780逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
尽管未示出,手机还可以包括摄像头、蓝牙模块等,在此不再赘述。
在本发明实施例中,该终端设备所包括的处理器780还具有对应前述实施例处理器601的功能。
值得注意的是,上述电子设备实施例中,所包括的各个单元只是按照功能逻辑进行划分的,但并不局限于上述的划分,只要能够实现相应的功能即可;另外,各功能单元的具体名称也只是为了便于相互区分,并不用于限制本发明的保护范围。
另外,本领域普通技术人员可以理解实现上述各方法实施例中的全部或部分步骤是可以通过程序来指令相关的硬件完成,相应的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。

Claims (20)

  1. 一种场景音效的控制方法,其特征在于,包括:
    电子设备在被开启后,启动具有监听功能的服务;
    所述电子设备通过所述具有监听功能的服务对所述电子设备的音轨进行监听,确定所述电子设备的音轨是否有音频输出;所述电子设备的音轨与所述电子设备内的应用之间具有映射关系;
    若所述电子设备确定所述电子设备的音轨有音频输出,则依据所述映射关系确定与所述电子设备的音轨有映射关系的应用;
    所述电子设备获取所述应用对应的场景音效,并将所述电子设备当前音效设置为所述场景音效。
  2. 根据权利要求1所述方法,其特征在于,所述电子设备的音轨与所述电子设备内的应用之间具有映射关系包括:
    所述电子设备的音轨与所述电子设备内的应用的客户端之间具有映射关系。
  3. 根据权利要求2所述方法,其特征在于,所述电子设备的音轨与所述电子设备内的应用的客户端之间具有映射关系包括:
    所述电子设备的音轨与所述电子设备内的应用的客户端所属的应用类型之间具有映射关系。
  4. 根据权利要求3所述方法,其特征在于,在所述依据所述映射关系确定与所述电子设备的音轨有映射关系的应用之前,所述方法还包括:
    所述电子设备与位于网络侧的服务器建立通信连接,并通过所述通信连接向所述网络侧的服务器发送查询请求,在所述查询请求中携带所述客户端名称或者携带所述应用的名称;在所述网络侧的服务器中保存有客户端的分类信息或者依应用的名称对客户端分类的分类信息;
    所述依据所述映射关系确定与所述电子设备的音轨有映射关系的应用包括:
    所述电子设备接收来自所述服务器返回的应用类型,所述应用类型由所述网络侧的服务器依据所述客户端的分类信息或者依应用的名称对客户端分类的分类信息确定;
    所述电子设备获取所述应用对应的场景音效包括:
    所述电子设备获取所述应用类型对应的场景音效。
  5. 根据权利要求4所述方法,其特征在于,所述方法还包括:
    所述电子设备通过所述通信连接更新在所述网络侧的服务器中保存的客户端的分类信息或者依应用的名称对客户端分类的分类信息。
  6. 根据权利要求1至5任意一项所述方法,其特征在于,所述应用的类型包括:音乐播放器、视频播放器、游戏应用中的至少一项;应用的类型与场景音效之间具有对应关系;
    所述电子设备获取所述应用对应的场景音效,包括:
    所述电子设备依据所述应用的类型与场景音效之间具有的对应关系确定所述应用所属的应用类型对应的场景音效。
  7. 根据权利要求1所述方法,其特征在于,所述场景音效包括:音乐,视频,游戏的场景音效;所述方法还包括:建可扩展标记语言XML文件,分别对音乐,视频,游戏进行归类;将所述XML文件发送到指定的服务器。
  8. 根据权利要求7所述方法,其特征在于,所述电子设备通过所述具有监听功能的服务对所述电子设备的音轨进行监听,确定所述电子设备的音轨是否有音频输出包括:
    所述电子设备在应用层通过服务监听音轨的创建和释放,获得音轨的音频输出情况。
  9. 根据权利要求8所述方法,其特征在于,所述电子设备的音轨与所述电子设备内的应用之间具有映射关系包括:每个音轨绑定有一个应用;
    所述获取所述应用对应的场景音效包括:
    依据所述XML文件确定所述创建的音轨对应的场景音效。
  10. 一种场景音效的控制装置,其特征在于,包括:
    监听控制单元,用于在电子设备被开启后,启动具有监听功能的服务;
    监听单元,用于通过所述具有监听功能的服务对所述电子设备的音轨进行监听,确定所述电子设备的音轨是否有音频输出;所述电子设备的音轨与所述电子设备内的应用之间具有映射关系;
    应用确定单元,用于若所述监听单元确定所述电子设备的音轨有音频输 出,则依据所述映射关系确定与所述电子设备的音轨有映射关系的应用;
    音效设置单元,用于获取所述应用对应的场景音效,并将所述电子设备当前音效设置为所述场景音效。
  11. 根据权利要求7所述场景音效的控制装置,其特征在于,所述电子设备的音轨与所述电子设备内的应用之间具有映射关系包括:
    所述电子设备的音轨与所述电子设备内的应用的客户端之间具有映射关系。
  12. 根据权利要求8所述场景音效的控制装置,其特征在于,所述电子设备的音轨与所述电子设备内的应用的客户端之间具有映射关系包括:
    所述电子设备的音轨与所述电子设备内的应用的客户端所属的应用类型之间具有映射关系。
  13. 根据权利要求9所述场景音效的控制装置,其特征在于,所述场景音效的控制装置还包括:
    连接建立单元,用于在所述应用确定单元依据所述映射关系确定与所述电子设备的音轨有映射关系的应用之前,与位于网络侧的服务器建立通信连接;
    请求发送单元,用于通过所述通信连接向所述网络侧的服务器发送查询请求,在所述查询请求中携带所述客户端名称或者携带所述应用的名称;在所述网络侧的服务器中保存有客户端的分类信息或者依应用的名称对客户端分类的分类信息;
    所述应用确定单元,具体用于接收来自所述服务器返回的应用类型,所述应用类型由所述网络侧的服务器依据所述客户端的分类信息或者依应用的名称对客户端分类的分类信息确定;
    所述音效设置单元,具体用于获取所述应用类型对应的场景音效,并将所述电子设备当前音效设置为所述场景音效。
  14. 根据权利要求10所述场景音效的控制装置,其特征在于,所述场景音效的控制装置还包括:
    信息更新单元,用于通过所述通信连接更新在所述网络侧的服务器中保存的客户端的分类信息或者依应用的名称对客户端分类的分类信息。
  15. 根据权利要求7至11任意一项所述场景音效的控制装置,其特征在 于,所述应用的类型包括:音乐播放器、视频播放器、游戏应用中的至少一项;应用的类型与场景音效之间具有对应关系;
    所述音效设置单元,具体用于依据所述应用的类型与场景音效之间具有的对应关系确定所述应用所属的应用类型对应的场景音效,并将所述电子设备当前音效设置为所述场景音效。
  16. 根据权利要求11所述场景音效的控制装置,其特征在于,所述场景音效包括:音乐,视频,游戏的场景音效;所述电子设还包括:
    文件维护单元,用于创建可扩展标记语言XML文件,分别对音乐,视频,游戏进行归类;将所述XML文件发送到指定的服务器。
  17. 根据权利要求16所述场景音效的控制装置,其特征在于,
    所述监听单元,用于在应用层通过服务监听音轨的创建和释放,获得音轨的音频输出情况;
    所述电子设备的音轨与所述电子设备内的应用之间具有映射关系包括:每个音轨绑定有一个应用;
    所述音效设置单元,用于获取所述应用对应的场景音效包括:依据所述XML文件确定所述创建的音轨对应的场景音效。
  18. 一种电子设备,包括:处理器、存储器以及用于输出场景音效的音频输出设备;其特征在于,所述处理器用于执行权利要求1~6任意一项所述的方法。
  19. 一种计算机可读存储介质,其特征在于,其存储用于电子数据交换的计算机程序,其中,所述计算机程序被执行的情况下实现如权利要求1~9任一项所述的方法。
  20. 一种程序产品,其特征在于,所述计算机程序被执行的情况下实现如权利要求1~9任一项所述的方法。
PCT/CN2017/088788 2016-06-16 2017-06-16 一种场景音效的控制方法、及电子设备 WO2017215660A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP17812771.8A EP3441874B1 (en) 2016-06-16 2017-06-16 Scene sound effect control method, and electronic device
US16/094,496 US10891102B2 (en) 2016-06-16 2017-06-16 Scene sound effect control method, and electronic device
US16/430,605 US10817255B2 (en) 2016-06-16 2019-06-04 Scene sound effect control method, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610447232.0A CN106126174B (zh) 2016-06-16 2016-06-16 一种场景音效的控制方法、及电子设备
CN201610447232.0 2016-06-16

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US16/094,496 A-371-Of-International US10891102B2 (en) 2016-06-16 2017-06-16 Scene sound effect control method, and electronic device
US16/430,605 Continuation US10817255B2 (en) 2016-06-16 2019-06-04 Scene sound effect control method, and electronic device

Publications (1)

Publication Number Publication Date
WO2017215660A1 true WO2017215660A1 (zh) 2017-12-21

Family

ID=57471014

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/088788 WO2017215660A1 (zh) 2016-06-16 2017-06-16 一种场景音效的控制方法、及电子设备

Country Status (4)

Country Link
US (2) US10891102B2 (zh)
EP (1) EP3441874B1 (zh)
CN (1) CN106126174B (zh)
WO (1) WO2017215660A1 (zh)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105959481B (zh) * 2016-06-16 2019-04-30 Oppo广东移动通信有限公司 一种场景音效的控制方法、及电子设备
CN106126174B (zh) 2016-06-16 2019-02-22 Oppo广东移动通信有限公司 一种场景音效的控制方法、及电子设备
CN106775562A (zh) * 2016-12-09 2017-05-31 奇酷互联网络科技(深圳)有限公司 音频参数处理的方法及装置
CN109165005B (zh) * 2018-09-04 2020-08-25 Oppo广东移动通信有限公司 音效增强方法、装置、电子设备及存储介质
WO2020103076A1 (zh) * 2018-11-22 2020-05-28 深圳市欢太科技有限公司 音频播放处理方法、装置、终端和计算机可读存储介质
US20200310736A1 (en) * 2019-03-29 2020-10-01 Christie Digital Systems Usa, Inc. Systems and methods in tiled display imaging systems
CN112233647A (zh) * 2019-06-26 2021-01-15 索尼公司 信息处理设备和方法以及计算机可读存储介质
CN112791407A (zh) * 2021-01-15 2021-05-14 网易(杭州)网络有限公司 一种音效控制方法及装置
CN115904303A (zh) * 2021-05-21 2023-04-04 荣耀终端有限公司 一种播放声音的方法及设备
CN114900505B (zh) * 2022-04-18 2024-01-30 广州市迪士普音响科技有限公司 一种基于web的音频场景定时切换方法、装置及介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130044883A1 (en) * 2005-06-03 2013-02-21 Apple Inc. Techniques for presenting sound effects on a portable media player
EP2760175A1 (en) * 2013-01-24 2014-07-30 HTC Corporation Scene-sound set operating method and portable device
CN104410748A (zh) * 2014-10-17 2015-03-11 广东小天才科技有限公司 一种根据移动终端位置添加背景音效的方法及移动终端
CN104778067A (zh) * 2015-04-27 2015-07-15 努比亚技术有限公司 启动音效的方法及终端设备
CN105468328A (zh) * 2014-09-03 2016-04-06 联想(北京)有限公司 一种信息处理方法及电子设备
CN105554548A (zh) * 2015-12-08 2016-05-04 深圳Tcl数字技术有限公司 音频数据输出方法及装置
CN106126174A (zh) * 2016-06-16 2016-11-16 广东欧珀移动通信有限公司 一种场景音效的控制方法、及电子设备

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8768494B1 (en) * 2003-12-22 2014-07-01 Nvidia Corporation System and method for generating policy-based audio
US8406435B2 (en) * 2005-03-18 2013-03-26 Microsoft Corporation Audio submix management
US20070015121A1 (en) * 2005-06-02 2007-01-18 University Of Southern California Interactive Foreign Language Teaching
US9002885B2 (en) * 2010-09-16 2015-04-07 Disney Enterprises, Inc. Media playback in a virtual environment
CN102685712B (zh) * 2011-03-09 2016-08-03 中兴通讯股份有限公司 一种身份位置分离网络中的映射服务器及其实现方法
US8838261B2 (en) * 2011-06-03 2014-09-16 Apple Inc. Audio configuration based on selectable audio modes
CN103893971B (zh) 2012-12-25 2015-05-27 腾讯科技(深圳)有限公司 一种游戏音效的制作方法和客户端
US9519708B2 (en) * 2013-05-29 2016-12-13 Microsoft Technology Licensing, Llc Multiple concurrent audio modes
CN103841495B (zh) * 2014-03-03 2019-11-26 联想(北京)有限公司 一种音频参数调整方法及装置
CN104883642B (zh) 2015-03-27 2018-09-25 成都上生活网络科技有限公司 一种音效调节方法

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130044883A1 (en) * 2005-06-03 2013-02-21 Apple Inc. Techniques for presenting sound effects on a portable media player
EP2760175A1 (en) * 2013-01-24 2014-07-30 HTC Corporation Scene-sound set operating method and portable device
CN105468328A (zh) * 2014-09-03 2016-04-06 联想(北京)有限公司 一种信息处理方法及电子设备
CN104410748A (zh) * 2014-10-17 2015-03-11 广东小天才科技有限公司 一种根据移动终端位置添加背景音效的方法及移动终端
CN104778067A (zh) * 2015-04-27 2015-07-15 努比亚技术有限公司 启动音效的方法及终端设备
CN105554548A (zh) * 2015-12-08 2016-05-04 深圳Tcl数字技术有限公司 音频数据输出方法及装置
CN106126174A (zh) * 2016-06-16 2016-11-16 广东欧珀移动通信有限公司 一种场景音效的控制方法、及电子设备

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3441874A4 *

Also Published As

Publication number Publication date
US20190102141A1 (en) 2019-04-04
EP3441874A1 (en) 2019-02-13
EP3441874A4 (en) 2019-05-29
US20190286411A1 (en) 2019-09-19
US10891102B2 (en) 2021-01-12
US10817255B2 (en) 2020-10-27
CN106126174B (zh) 2019-02-22
CN106126174A (zh) 2016-11-16
EP3441874B1 (en) 2023-04-26

Similar Documents

Publication Publication Date Title
WO2017215660A1 (zh) 一种场景音效的控制方法、及电子设备
WO2017206916A1 (zh) 处理器中内核运行配置的确定方法以及相关产品
CN107659637B (zh) 音效设置方法、装置、存储介质以及终端
WO2017215661A1 (zh) 一种场景音效的控制方法、及电子设备
WO2021083168A1 (zh) 视频分享方法及电子设备
US10951557B2 (en) Information interaction method and terminal
WO2021204045A1 (zh) 音频的控制方法及电子设备
US10675541B2 (en) Control method of scene sound effect and related products
TWI512525B (zh) 關聯終端的方法及系統、終端及電腦可讀取儲存介質
WO2020156123A1 (zh) 信息处理方法及终端设备
TW201715496A (zh) 多媒體海報生成方法及裝置
WO2017215635A1 (zh) 一种音效处理方法及移动终端
WO2021068885A1 (zh) 控制方法及电子设备
WO2021104251A1 (zh) 控制方法及第一电子设备
JP7229365B2 (ja) 権限管理方法及び端末機器
WO2021012908A1 (zh) 消息发送方法及移动终端
JP7324949B2 (ja) アプリケーション共有方法、第1電子機器及びコンピュータ可読記憶媒体
WO2019076377A1 (zh) 图像的查看方法及移动终端
WO2021083090A1 (zh) 消息发送方法及移动终端
CN110673770A (zh) 消息展示方法及终端设备
CN108429805B (zh) 一种文件下载处理方法、发送终端及接收终端
CN105159655B (zh) 行为事件的播放方法和装置
WO2015117550A1 (en) Method and apparatus for acquiring reverberated wet sound
US10853412B2 (en) Scenario-based sound effect control method and electronic device
WO2021057282A1 (zh) 应用共享方法及终端

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 2017812771

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2017812771

Country of ref document: EP

Effective date: 20181106

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17812771

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE