WO2023230782A1 - Sound effect control method and apparatus, and storage medium - Google Patents

Sound effect control method and apparatus, and storage medium Download PDF

Info

Publication number
WO2023230782A1
WO2023230782A1 PCT/CN2022/096053 CN2022096053W WO2023230782A1 WO 2023230782 A1 WO2023230782 A1 WO 2023230782A1 CN 2022096053 W CN2022096053 W CN 2022096053W WO 2023230782 A1 WO2023230782 A1 WO 2023230782A1
Authority
WO
WIPO (PCT)
Prior art keywords
audio
signal
video
sound effect
training
Prior art date
Application number
PCT/CN2022/096053
Other languages
English (en)
French (fr)
Inventor
余俊飞
史润宇
郭锴槟
贺天睿
Original Assignee
北京小米移动软件有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京小米移动软件有限公司 filed Critical 北京小米移动软件有限公司
Priority to PCT/CN2022/096053 priority Critical patent/WO2023230782A1/zh
Priority to CN202280004323.0A priority patent/CN117501363A/zh
Publication of WO2023230782A1 publication Critical patent/WO2023230782A1/zh

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439 Processing of audio elementary streams

Definitions

  • the present disclosure relates to the field of audio processing, and in particular to sound effect control methods, devices and storage media.
  • In the related art, smart devices such as mobile phones and speakers control sound effects through subjective human selection: a sound effect mode is manually selected, the sound effect controller adjusts its parameters according to the selected sound effect mode, the audio file settings and microphone settings are adjusted according to those parameters, and the adjusted audio is played.
  • In practice, manual sound effect control is relatively complex to operate and the available sound effect modes are limited. As a result, neither the audio content nor the environment of the device can be perceived, and the playback sound effects cannot be adjusted intelligently in an effective and convenient way.
  • the present disclosure provides a sound effect control method, device and storage medium.
  • According to a first aspect of the embodiments of the present disclosure, a sound effect control method is provided, applied to a terminal, including: acquiring a first audio signal, a second audio signal and a video signal, where the first audio signal is the audio signal in the video to be played in the terminal, the second audio signal at least includes the first audio signal and an environmental audio signal, and the video signal is the video signal in the video to be played; determining target sound effect control information based on the second audio signal and the video signal; and controlling the terminal to play the sound effect of the first audio signal according to the target sound effect control information.
  • In one implementation, determining the sound effect control information based on the second audio signal and the video signal includes: inputting the second audio signal and the video signal into a sound effect control information generation model, which is pre-trained based on audio training signals played by the terminal, environmental audio training signals and video training signals played by the terminal; and determining the target sound effect control information based on the output of the sound effect control information generation model.
  • In one implementation, the sound effect control information generation model is pre-trained in the following manner: acquiring audio training signals and video training signals, where the audio training signals at least include audio training signals played by the terminal and environmental audio training signals, and the video training signals include video training signals played by the terminal; training a multi-modal deep learning model based on the audio training signals, the video training signals and preset audio control information until convergence; and using the converged multi-modal deep learning model as the sound effect control information generation model.
  • In one implementation, training the multi-modal deep learning model based on the audio training signal, the video training signal and the preset audio control information includes: performing noise reduction on the audio training signal and dividing the denoised audio training signal into audio frames of equal duration according to a preset frame length; preprocessing the acquired video signal by performing nearest-neighbor upsampling on the video training signal to obtain sampled video frames aligned with the audio frames; and training the multi-modal deep learning model based on the audio frames and the sampled video frames.
  • In one implementation, training the multi-modal deep learning model based on the audio frames and sampled video frames includes: extracting logarithmic mel spectrum audio signal features of the audio frames and high-dimensional video signal features of the sampled video frames; using a multi-layer convolutional neural network to map both feature sets to a higher dimension and fusing the mapped audio and video features to obtain fused features; and training the multi-modal deep learning model based on the fused features.
  • According to a second aspect of the embodiments of the present disclosure, a sound effect control device is provided, applied to a terminal, including: an acquisition unit that acquires a first audio signal, a second audio signal and a video signal, where the first audio signal is the audio signal in the video to be played in the terminal, the second audio signal at least includes the first audio signal and an environmental audio signal, and the video signal is the video signal in the video to be played; a determining unit that determines target sound effect control information based on the second audio signal and the video signal; and a playback unit that controls the terminal to play the sound effect of the first audio signal according to the target sound effect control information.
  • In one implementation, the determining unit determines the sound effect control information by inputting the second audio signal and the video signal into the sound effect control information generation model, which is pre-trained based on audio training signals played by the terminal, environmental audio training signals and video training signals played by the terminal, and determining the target sound effect control information based on the model's output.
  • In one implementation, the sound effect control information generation model of the determining unit is pre-trained by acquiring audio training signals and video training signals, where the audio training signals at least include audio training signals played by the terminal and environmental audio training signals and the video training signals include video training signals played by the terminal, training a multi-modal deep learning model based on these signals and preset audio control information until convergence, and using the converged model as the sound effect control information generation model.
  • In one implementation, the determining unit trains the multi-modal deep learning model by performing noise reduction on the audio training signal and dividing the denoised signal into audio frames of equal duration according to a preset frame length, preprocessing the acquired video signal by nearest-neighbor upsampling of the video training signal to obtain sampled video frames aligned with the audio frames, and training the multi-modal deep learning model based on the audio frames and the sampled video frames.
  • In one implementation, the determining unit trains the multi-modal deep learning model based on the audio frames and sampled video frames by extracting the logarithmic mel spectrum audio signal features of the audio frames and the high-dimensional video signal features of the sampled video frames, mapping both to higher-dimensional features with a multi-layer convolutional neural network, fusing the mapped features to obtain fused features, and training the model on the fused features.
  • According to a third aspect of the embodiments of the present disclosure, a sound effect control device is provided, including: a processor; and a memory for storing instructions executable by the processor; where the processor is configured to execute the sound effect control method described in the first aspect or any implementation of the first aspect.
  • According to a fourth aspect of the embodiments of the present disclosure, a non-transitory computer-readable storage medium is provided; when instructions in the storage medium are executed by a processor of a mobile terminal, the mobile terminal is enabled to perform the sound effect control method described in the first aspect or any implementation of the first aspect.
  • The technical solution provided by the embodiments of the present disclosure may include the following beneficial effects: a first audio signal, a second audio signal and a video signal are acquired, where the first audio signal is the audio signal in the video to be played in the terminal, the second audio signal at least includes the first audio signal and the ambient audio signal, and the video signal is the video signal in the video to be played; target sound effect control information is determined based on the second audio signal and the video signal, and the terminal is controlled to play the sound effect of the first audio signal according to the target sound effect control information. In this way, audio parameters such as playback volume and pitch can be adjusted dynamically and intelligently, the environmental adaptability of smart devices in sound effect control is improved, and users obtain the best audio-visual experience.
  • Figure 1 is a flow chart of a sound effect control method according to an exemplary embodiment.
  • Figure 2 is a flow chart of a method for determining sound effect control information according to an exemplary embodiment.
  • FIG. 3 is a method flow chart illustrating a sound effect control information generation model according to an exemplary embodiment.
  • Figure 4 is a flow chart of a method for training a multi-modal deep learning model according to an exemplary embodiment.
  • Figure 5 is a flow chart of a method for training a multi-modal deep learning model according to an exemplary embodiment.
  • FIG. 6 shows a flow chart of a method for extracting logarithmic mel spectrum signal features of an audio frame according to an exemplary embodiment of the present disclosure.
  • Figure 7 is a block diagram of an audio control device according to an exemplary embodiment.
  • FIG. 8 is a block diagram of a device for sound effect control according to an exemplary embodiment.
  • The sound effect control method provided by the embodiments of the present disclosure can be applied to smart devices such as mobile phones and tablets, and dynamically and intelligently adjusts the sound effect according to the audio playback content and the environment where the device is located, thereby improving the environmental adaptability of smart devices in sound effect control and giving users a better audio-visual experience.
  • In the related art, sound effects are controlled by subjective human selection, which can control the effects of the audio signal in terms of echo, reverberation, equalization and so on: the user manually sets the parameters of the speaker's echo processing module, reverberation processing module, equalization processing module, etc., or manually selects a pre-adjusted sound effect preset, and the sound effect controller adjusts the audio file settings and microphone settings according to these parameters so that the audio played back has been processed with the selected sound effects.
  • In practical applications there is still room for improvement: for example, the sound effects could be adjusted intelligently according to the environment, or according to both the device's playback environment and the audio and video content.
  • the device obtains a first audio signal, a second audio signal and a video signal.
  • the first audio signal is the audio signal in the video to be played by the terminal.
  • the second audio signal at least includes the first audio signal and the environmental audio signal.
  • the video signal is the video signal in the video to be played.
  • Features are extracted from the audio and video data and passed to the sound effect control information generation model; the target sound effect control information is determined from the model's output, and the audio signal is played according to that target sound effect control information.
  • the sound effect of the audio to be played can be intelligently adjusted according to the environment of the device and the content of the video to be played.
  • the operation is simple, and it can adapt to the environment of the device in real time, allowing users to obtain a better audio-visual experience.
  • Figure 1 is a flow chart of a sound effect control method according to an exemplary embodiment. As shown in Figure 1, the sound effect control method is applied to the terminal and includes the following steps.
  • a first audio signal, a second audio signal and a video signal are obtained.
  • the first audio signal is the audio signal in the video to be played in the terminal.
  • the second audio signal at least includes the first audio signal and the ambient audio signal.
  • The video signal is the video signal in the video to be played.
  • In step S12, target sound effect control information is determined based on the second audio signal and the video signal.
  • In step S13, the terminal is controlled to play the sound effect of the first audio signal according to the target sound effect control information.
  • three signals need to be obtained, namely a first audio signal, a second audio signal and a video signal.
  • The audio signal in the video to be played in the terminal is the first audio signal, and the second audio signal at least includes the audio signal in the video to be played in the terminal and the environmental audio signal; that is, the second audio signal at least includes the first audio signal and the environmental sound signal.
  • the method of obtaining the first audio signal and the second audio signal may be, for example, turning on the device microphone for acquisition.
  • the video signal is a video signal in the video to be played, and the video signal may be obtained by, for example, the terminal intercepting the currently played video.
  • the target sound effect control information is determined based on the second audio signal and the video signal, and the terminal plays the sound effect of the first audio signal based on the target sound effect control information. That is, the terminal controls the sound effect of the first audio signal based on the target sound effect control information.
  • the target sound effect control information adjusts the coefficients of the echo processing, reverberation processing, equalization processing and other processors of the sound, and controls the effects of the audio signal in aspects such as echo, reverberation, and equalization.
  • The target sound effect control information also adjusts the playback order, time, rate and intensity of each speaker, so that the audio can produce surround sound, stereo and other effects during playback.
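  • The patent does not define a concrete data layout for the target sound effect control information; purely as an illustration, the sketch below uses a hypothetical parameter set (an overall gain plus equalizer band gains, with names invented here) and applies it to an audio frame with standard SciPy filters:

```python
import numpy as np
from scipy.signal import butter, sosfilt

# Hypothetical example: target sound effect control information as a dict of
# parameters (field names are illustrative, not taken from the patent).
control_info = {
    "gain_db": -3.0,                       # overall playback volume
    "eq_bands": [(80, 250, 2.0),           # (low_hz, high_hz, band_gain_db)
                 (250, 2000, 0.0),
                 (2000, 8000, -1.5)],
}

def apply_sound_effect(audio: np.ndarray, sr: int, info: dict) -> np.ndarray:
    """Apply a simple gain plus band equalizer described by `info` to `audio`."""
    out = np.zeros_like(audio, dtype=np.float64)
    for low, high, band_db in info["eq_bands"]:
        sos = butter(2, [low, high], btype="bandpass", fs=sr, output="sos")
        out += sosfilt(sos, audio) * 10 ** (band_db / 20.0)
    return out * 10 ** (info["gain_db"] / 20.0)

# usage: processed = apply_sound_effect(frame, 16000, control_info)
```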
  • the sound effect control method provided can obtain the environmental audio of the device, and include the environmental audio in the factors of the sound effect control information, so that the sound effect of the audio to be played can be more intelligently adjusted.
  • the sound effect control information needs to be determined.
  • FIG. 2 is a flow chart of a method for determining sound effect control information according to an exemplary embodiment. As shown in Figure 2, determining sound effect control information based on the second audio signal and the video signal includes the following steps.
  • In step S21, the second audio signal and the video signal are input into the sound effect control information generation model.
  • the sound effect control information generation model is pre-trained based on the audio training signal played by the terminal, the environmental audio training signal and the video training signal played by the terminal.
  • In step S22, target sound effect control information is determined based on the output of the sound effect control information generation model.
  • the target sound effect control information is obtained by inputting the second audio signal and the video signal into the sound effect control information generation model, and the output of the model is the target sound effect control information.
  • the sound effect control information generation model is pre-trained based on the audio training signal played by the terminal, the environmental audio training signal and the video training signal played by the terminal.
  • the environmental audio training signals can include many types, for example, environmental training signals with noisy voices, environmental training signals with busy traffic, environmental training signals at construction sites, environmental training signals in elevators, quiet environmental training signals, etc.
  • For example, in an environment with noisy human voices, the sound effect control information generation model outputs, according to the second audio signal and the video signal, target sound effect control information adapted to that environment, thereby obtaining the target sound effect control information.
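  • The disclosure does not define the model interface; assuming a PyTorch module that takes audio and video feature sequences, the inference step described above could be sketched as follows, with all names being illustrative:

```python
import torch

# Hypothetical inference flow: features extracted from the second audio signal
# (played audio plus environment) and from the video are fed to the pre-trained
# generation model; its output is interpreted as the target control information.
def generate_control_info(model: torch.nn.Module,
                          audio_feats: torch.Tensor,   # e.g. (T, n_mels) log-mel features
                          video_feats: torch.Tensor):  # e.g. (T, d_video) video features
    model.eval()
    with torch.no_grad():
        out = model(audio_feats.unsqueeze(0), video_feats.unsqueeze(0))
    return out.squeeze(0)  # sound effect parameters applied downstream
```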
  • In the embodiments of the present disclosure, the target sound effect control information can be adjusted dynamically and intelligently, which makes the device more comfortable to use.
  • the sound effect control information generation model needs to be pre-trained.
  • FIG. 3 is a method flow chart illustrating a sound effect control information generation model according to an exemplary embodiment. As shown in Figure 3, the pre-training of the sound effect control information generation model includes the following steps.
  • In step S31, an audio training signal and a video training signal are obtained.
  • the audio training signal at least includes the audio training signal played by the terminal and the environmental audio training signal.
  • the video training signal includes the video training signal played by the terminal.
  • In step S32, the multi-modal deep learning model is trained based on the audio training signal, the video training signal and preset audio control information until convergence.
  • In step S33, the converged multi-modal deep learning model is used as the sound effect control information generation model.
  • In the embodiments of the present disclosure, the sound effect control information generation model is pre-trained. Pre-training requires obtaining audio training signals and video training signals, where the audio training signals at least include audio training signals played by the terminal and environmental audio training signals, and the video training signals include video training signals played by the terminal. The multi-modal deep learning model is trained on the audio training signals, the video training signals and the preset audio control information until it converges, and the converged model is used as the sound effect control information generation model.
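  • The loss function and optimizer are not stated in the disclosure; a minimal training sketch, assuming the preset audio control information serves as a regression target and mean squared error as the loss (both assumptions), could look like this:

```python
import torch
import torch.nn as nn

def train_generation_model(model, loader, epochs=10, lr=1e-3):
    """Train the multi-modal model until (approximate) convergence.
    `loader` yields (audio_feats, video_feats, preset_control_info) batches;
    the MSE loss and Adam optimizer are assumptions, not taken from the patent."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for audio_feats, video_feats, target_info in loader:
            pred = model(audio_feats, video_feats)
            loss = loss_fn(pred, target_info)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model  # the converged model is used as the generation model
```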
  • the sound effect control method provided can realize real-time control and processing of sound effects, so that the user has a good usage experience.
  • a multi-modal deep learning model needs to be trained.
  • Figure 4 is a flow chart of a method for training a multi-modal deep learning model according to an exemplary embodiment. As shown in Figure 4, training the multi-modal deep learning model based on the audio training signal, video training signal and preset audio control information includes the following steps.
  • In step S41, noise reduction is performed on the audio training signal, and the denoised audio training signal is divided into audio frames of equal duration according to a preset frame length.
  • In the embodiments of the present disclosure, the audio training signal undergoes noise reduction, where the noise reduction includes inputting the audio training signal into an adaptive filter.
  • The adaptive filter can be designed with an FIR filter and a time-domain adaptive filtering method, and the denoised audio training signal is evenly divided into multiple audio frames of equal duration, for example 3-second frames: if the frame duration is longer than 3 seconds the user's listening experience is better, and if it is shorter than 3 seconds the recognition rate of the audio training signal is higher.
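  • The patent specifies only an FIR filter with time-domain adaptive filtering; a minimal sketch, assuming an LMS update rule and an available noise reference signal (both assumptions), might look like this:

```python
import numpy as np

def lms_denoise(signal, reference, taps=32, mu=0.01):
    """Time-domain adaptive FIR filtering (LMS): remove the part of `signal`
    that can be predicted from the noise `reference`."""
    w = np.zeros(taps)
    out = np.zeros_like(signal, dtype=np.float64)
    for n in range(taps, len(signal)):
        x = reference[n - taps:n][::-1]
        e = signal[n] - w @ x          # error = denoised sample
        w += mu * e * x                # LMS weight update
        out[n] = e
    return out

def split_frames(signal, sr, frame_sec=3.0):
    """Split the denoised signal into frames of equal duration (e.g. 3 s each)."""
    flen = int(sr * frame_sec)
    n = len(signal) // flen
    return signal[: n * flen].reshape(n, flen)
```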
  • In step S42, the acquired video signal is preprocessed, where the preprocessing is performing nearest-neighbor upsampling on the video training signal to obtain sampled video frames aligned with the audio frames.
  • In the embodiments of the present disclosure, the video training signal may be obtained, for example, by transmission through the terminal or by recording with a camera installed on the terminal. The acquired video signal is preprocessed by performing nearest-neighbor upsampling on it to obtain sampled video frames aligned with the audio frames; nearest-neighbor upsampling copies the image signal at adjacent moments in the video training signal until the number of video frames equals the number of audio frames.
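  • A short sketch of the nearest-neighbor upsampling described above, duplicating the temporally nearest video frame until the video frame count matches the audio frame count (array shapes are illustrative):

```python
import numpy as np

def nearest_neighbor_upsample(video_frames: np.ndarray, n_audio_frames: int) -> np.ndarray:
    """Copy the temporally nearest video frame so that the video has the same
    number of frames as the audio; `video_frames` has shape (T_v, H, W, C)."""
    t_v = video_frames.shape[0]
    # index of the nearest source frame for each audio-aligned target position
    idx = np.round(np.linspace(0, t_v - 1, n_audio_frames)).astype(int)
    return video_frames[idx]
```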
  • In step S43, the multi-modal deep learning model is trained based on the audio frames and sampled video frames.
  • the multi-modal deep learning model is trained based on audio frames and sampled video frames.
  • the multi-modal deep learning model provided can dynamically process the sound effect adjustment of audio playback in various scenarios.
  • Figure 5 is a flow chart of a method for training a multi-modal deep learning model according to an exemplary embodiment. As shown in Figure 5, training a multi-modal deep learning model based on audio frames and sampled video frames includes the following steps.
  • In step S51, the logarithmic mel spectrum audio signal features of the audio frames are extracted, and the high-dimensional video signal features of the sampled video frames are extracted.
  • Figure 6 shows the flow of a method for extracting the logarithmic mel spectrum signal features of an audio frame according to an exemplary embodiment of the present disclosure. Referring to Figure 6, the preprocessed audio training signal is windowed, i.e. the audio training signal S_pre is multiplied by a window function f_win to give S_win = S_pre * f_win; a fast Fourier transform is applied to the windowed signal to obtain the frequency-domain audio signal S_fre, and its amplitude spectrum is computed as S_pow = abs(S_fre). A bank of k mel filters h_mel is designed, where the frequency-domain formula of the m-th filter H_m is given by an equation that appears only as an image in the published document.
  • In the above bank of k mel filters, the minimum value of k is 0, the maximum value does not exceed the number of sampling points of the audio training signal, and the maximum value of k is related to the terminal on which the method runs.
  • The amplitude spectrum S_pow is convolved with the mel filters and the logarithm of the result is computed to obtain the logarithmic mel spectrum features; the calculation formula, which uses a convolution operator, likewise appears only as an image in the published document.
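  • The exact window function and mel filter formula are published only as images; as an illustrative sketch, the standard log-mel pipeline the text describes (windowing, FFT, amplitude spectrum, mel filter bank, logarithm) could be implemented as follows, with librosa's triangular mel filters standing in for H_m and a Hann window for f_win (both are assumptions, not taken from the patent):

```python
import numpy as np
import librosa

def log_mel_features(frame: np.ndarray, sr: int = 16000, n_mels: int = 40) -> np.ndarray:
    """Windowing -> FFT -> amplitude spectrum -> mel filter bank -> log,
    mirroring the S_pre -> S_win -> S_fre -> S_pow steps described in the text."""
    s_win = frame * np.hanning(len(frame))     # S_win = S_pre * f_win (Hann window assumed)
    s_fre = np.fft.rfft(s_win)                 # frequency-domain signal S_fre
    s_pow = np.abs(s_fre)                      # amplitude spectrum S_pow = abs(S_fre)
    h_mel = librosa.filters.mel(sr=sr, n_fft=len(frame), n_mels=n_mels)  # k mel filters h_mel
    return np.log(h_mel @ s_pow + 1e-10)       # logarithmic mel spectrum features
```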
  • the high-dimensional video signal features of the sampled video frames are extracted, specifically, a deep learning network is used to extract the sampled video frames into high-dimensional video signal features.
  • In step S52, a multi-layer convolutional neural network is used to perform high-dimensional mapping on the logarithmic mel spectrum audio signal features and the high-dimensional video signal features, and feature fusion is performed on the mapped audio signal features and video signal features to obtain fused features.
  • In the embodiments of the present disclosure, a multi-layer convolutional neural network maps the logarithmic mel spectrum audio signal features and the high-dimensional video signal features to higher-dimensional features, and the mapped audio signal features are fused with the video signal features.
  • The feature fusion can be performed with a BLSTM (Bi-directional Long Short Term Memory) network to obtain the fused features.
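  • The disclosure does not specify network sizes; a minimal sketch of the described structure (per-modality convolutional mapping to a higher dimension followed by BLSTM-based fusion) might look like the following, where all layer sizes and the output dimension are illustrative assumptions:

```python
import torch
import torch.nn as nn

class AVFusion(nn.Module):
    """Convolutional mapping of each modality, concatenation, bidirectional LSTM
    fusion, and a head that emits sound effect control parameters (sizes assumed)."""
    def __init__(self, n_mels=40, d_video=512, d_hidden=256, d_out=8):
        super().__init__()
        self.audio_map = nn.Sequential(
            nn.Conv1d(n_mels, d_hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(d_hidden, d_hidden, kernel_size=3, padding=1), nn.ReLU())
        self.video_map = nn.Sequential(
            nn.Conv1d(d_video, d_hidden, kernel_size=3, padding=1), nn.ReLU())
        self.blstm = nn.LSTM(2 * d_hidden, d_hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * d_hidden, d_out)    # control parameters

    def forward(self, audio_feats, video_feats):      # both: (batch, time, features)
        a = self.audio_map(audio_feats.transpose(1, 2)).transpose(1, 2)
        v = self.video_map(video_feats.transpose(1, 2)).transpose(1, 2)
        fused, _ = self.blstm(torch.cat([a, v], dim=-1))
        return self.head(fused[:, -1])                # one control vector per clip
```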
  • In step S53, the multi-modal deep learning model is trained based on the fused features.
  • the multi-modal deep learning model is trained based on fusion features, which include mapped audio signal features and video signal features.
  • further training of the multi-modal deep learning model can better adjust the generation of audio control information according to the video playback content, so that the audio controlled by the sound effect control method better conforms to the video playback content.
  • an embodiment of the present disclosure also provides an audio control device.
  • the audio control device provided by the embodiment of the present disclosure includes hardware structures and/or software modules corresponding to each function.
  • the embodiments of the present disclosure can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is performed by hardware or computer software driving the hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art can use different methods to implement the described functions for each specific application, but such implementation should not be considered to go beyond the scope of the technical solutions of the embodiments of the present disclosure.
  • FIG. 7 is a block diagram of an audio control device according to an exemplary embodiment.
  • the audio control device 100 includes an acquisition unit 101 , a determination unit 102 and a playback unit 103 .
  • the acquisition unit 101 acquires a first audio signal, a second audio signal and a video signal.
  • the first audio signal is the audio signal in the video to be played in the terminal.
  • the second audio signal at least includes the first audio signal and the ambient audio signal.
  • The video signal is the video signal in the video to be played;
  • the determining unit 102 determines the target sound effect control information based on the second audio signal and the video signal;
  • the playback unit 103 controls the terminal to play the sound effect of the first audio signal according to the target sound effect control information.
  • In one implementation, the determining unit 102 determines the sound effect control information based on the second audio signal and the video signal in the following manner: the second audio signal and the video signal are input into the sound effect control information generation model, which is pre-trained based on the audio training signal played by the terminal, the environmental audio training signal and the video training signal played by the terminal; the target sound effect control information is determined based on the output of the sound effect control information generation model.
  • In one implementation, the sound effect control information generation model of the determining unit 102 is pre-trained in the following manner: audio training signals and video training signals are acquired, where the audio training signals at least include audio training signals played by the terminal and environmental audio training signals, and the video training signals include video training signals played by the terminal; the multi-modal deep learning model is trained based on the audio training signals, the video training signals and preset audio control information until convergence; and the converged multi-modal deep learning model is used as the sound effect control information generation model.
  • In one implementation, the determining unit 102 trains the multi-modal deep learning model based on the audio training signal, the video training signal and the preset audio control information in the following manner: noise reduction is performed on the audio training signal, and the denoised audio training signal is divided into audio frames of equal duration according to a preset frame length; the acquired video signal is preprocessed by nearest-neighbor upsampling of the video training signal to obtain sampled video frames aligned with the audio frames; and the multi-modal deep learning model is trained based on the audio frames and sampled video frames.
  • In one implementation, the determining unit 102 trains the multi-modal deep learning model based on the audio frames and sampled video frames in the following manner: the logarithmic mel spectrum audio signal features of the audio frames and the high-dimensional video signal features of the sampled video frames are extracted; a multi-layer convolutional neural network performs high-dimensional mapping on the logarithmic mel spectrum audio signal features and the high-dimensional video signal features, and the mapped audio signal features and video signal features are fused to obtain the fused features; based on the fused features, the multi-modal deep learning model is trained.
  • FIG. 8 is a block diagram of a device 200 for sound effect control according to an exemplary embodiment.
  • the device 200 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
  • device 200 may include one or more of the following components: processing component 202, memory 204, power component 206, multimedia component 208, audio component 210, input/output (I/O) interface 212, sensor component 214, and Communication component 216.
  • Processing component 202 generally controls the overall operations of device 200, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 202 may include one or more processors 220 to execute instructions to complete all or part of the steps of the above method.
  • processing component 202 may include one or more modules that facilitate interaction between processing component 202 and other components.
  • processing component 202 may include a multimedia module to facilitate interaction between multimedia component 208 and processing component 202.
  • Memory 204 is configured to store various types of data to support operations at device 200 . Examples of such data include instructions for any application or method operating on device 200, contact data, phonebook data, messages, pictures, videos, etc.
  • Memory 204 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
  • Power component 206 provides power to various components of device 200 .
  • Power components 206 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to device 200 .
  • Multimedia component 208 includes a screen that provides an output interface between the device 200 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide action.
  • multimedia component 208 includes a front-facing camera and/or a rear-facing camera.
  • the front camera and/or the rear camera may receive external multimedia data.
  • Each front-facing camera and rear-facing camera can be a fixed optical lens system or have a focal length and optical zoom capabilities.
  • Audio component 210 is configured to output and/or input audio signals.
  • audio component 210 includes a microphone (MIC) configured to receive external audio signals when device 200 is in operating modes, such as call mode, recording mode, and voice recognition mode. The received audio signals may be further stored in memory 204 or sent via communications component 216 .
  • audio component 210 also includes a speaker for outputting audio signals.
  • the I/O interface 212 provides an interface between the processing component 202 and a peripheral interface module, which may be a keyboard, a click wheel, a button, etc. These buttons may include, but are not limited to: Home button, Volume buttons, Start button, and Lock button.
  • Sensor component 214 includes one or more sensors for providing various aspects of status assessment for device 200 .
  • The sensor component 214 can detect the open/closed state of the device 200 and the relative positioning of components, such as the display and keypad of the device 200; the sensor component 214 can also detect a change in position of the device 200 or of a component of the device 200, the presence or absence of user contact with the device 200, the orientation or acceleration/deceleration of the device 200, and temperature changes of the device 200.
  • Sensor assembly 214 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • Sensor assembly 214 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 214 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 216 is configured to facilitate wired or wireless communication between apparatus 200 and other devices.
  • Device 200 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
  • the communication component 216 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communications component 216 also includes a near field communications (NFC) module to facilitate short-range communications.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • In an exemplary embodiment, the apparatus 200 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components, for performing the above method.
  • In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions, such as the memory 204 including instructions executable by the processor 220 of the device 200 to complete the above method, is also provided.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
  • “plurality” in this disclosure refers to two or more, and other quantifiers are similar.
  • “And/or” describes the relationship between related objects, indicating that there can be three relationships.
  • For example, A and/or B can mean: A exists alone, A and B exist simultaneously, or B exists alone.
  • the character “/” generally indicates that the related objects are in an “or” relationship.
  • The singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
  • first, second, etc. are used to describe various information, but the information should not be limited to these terms. These terms are only used to distinguish information of the same type from each other and do not imply a specific order or importance. In fact, expressions such as “first” and “second” can be used interchangeably.
  • first information may also be called second information, and similarly, the second information may also be called first information.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The present disclosure relates to a sound effect control method and apparatus, and a storage medium. The sound effect control method includes: acquiring a first audio signal, a second audio signal and a video signal, where the first audio signal is the audio signal in a video to be played in the terminal, the second audio signal at least includes the first audio signal and an environmental audio signal, and the video signal is the video signal in the video to be played; determining target sound effect control information based on the second audio signal and the video signal; and controlling the terminal to play the sound effect of the first audio signal according to the target sound effect control information. The sound effect control method of the present disclosure can improve the environmental adaptability of smart devices in sound effect control, so that users obtain the best audio-visual experience.

Description

Sound effect control method and apparatus, and storage medium
Technical Field
The present disclosure relates to the field of audio processing, and in particular to a sound effect control method and apparatus, and a storage medium.
Background
In the related art, smart devices such as mobile phones and speakers control sound effects by subjective human selection: a sound effect mode is manually selected, the sound effect controller adjusts its parameters according to the selected sound effect mode, the audio file settings and microphone settings are adjusted according to the parameters, and the adjusted audio is played. In practical applications, however, manual sound effect control is relatively complex to operate and the available sound effect modes are limited, so the audio content and the environment of the device cannot be perceived and the playback sound effects cannot be adjusted intelligently in an effective and convenient way.
Summary
To overcome the problems existing in the related art, the present disclosure provides a sound effect control method and apparatus, and a storage medium.
According to a first aspect of the embodiments of the present disclosure, a sound effect control method is provided, applied to a terminal, including:
acquiring a first audio signal, a second audio signal and a video signal, where the first audio signal is the audio signal in a video to be played in the terminal, the second audio signal at least includes the first audio signal and an environmental audio signal, and the video signal is the video signal in the video to be played;
determining target sound effect control information based on the second audio signal and the video signal;
controlling the terminal to play the sound effect of the first audio signal according to the target sound effect control information.
In one implementation, determining the sound effect control information based on the second audio signal and the video signal includes:
inputting the second audio signal and the video signal into a sound effect control information generation model, where the sound effect control information generation model is pre-trained based on audio training signals played by the terminal, environmental audio training signals and video training signals played by the terminal;
determining the target sound effect control information based on the output of the sound effect control information generation model.
In one implementation, the sound effect control information generation model is pre-trained in the following manner:
acquiring audio training signals and video training signals, where the audio training signals at least include audio training signals played by the terminal and environmental audio training signals, and the video training signals include video training signals played by the terminal;
training a multi-modal deep learning model based on the audio training signals, the video training signals and preset audio control information until convergence;
using the converged multi-modal deep learning model as the sound effect control information generation model.
In one implementation, training the multi-modal deep learning model based on the audio training signal, the video training signal and the preset audio control information includes:
performing noise reduction on the audio training signal, and dividing the denoised audio training signal into audio frames of equal duration according to a preset frame length;
preprocessing the acquired video signal, where the preprocessing is performing nearest-neighbor upsampling on the video training signal to obtain sampled video frames aligned with the audio frames;
training the multi-modal deep learning model based on the audio frames and the sampled video frames.
In one implementation, training the multi-modal deep learning model based on the audio frames and the sampled video frames includes:
extracting logarithmic mel spectrum audio signal features of the audio frames, and extracting high-dimensional video signal features of the sampled video frames;
using a multi-layer convolutional neural network to perform high-dimensional mapping on the logarithmic mel spectrum audio signal features and the high-dimensional video signal features respectively, and performing feature fusion on the mapped audio signal features and video signal features to obtain fused features;
training the multi-modal deep learning model based on the fused features.
According to a second aspect of the embodiments of the present disclosure, a sound effect control apparatus is provided, applied to a terminal, including:
an acquisition unit that acquires a first audio signal, a second audio signal and a video signal, where the first audio signal is the audio signal in a video to be played in the terminal, the second audio signal at least includes the first audio signal and an environmental audio signal, and the video signal is the video signal in the video to be played;
a determining unit that determines target sound effect control information based on the second audio signal and the video signal;
a playback unit that controls the terminal to play the sound effect of the first audio signal according to the target sound effect control information.
In one implementation, the determining unit determines the sound effect control information based on the second audio signal and the video signal in the following manner:
inputting the second audio signal and the video signal into a sound effect control information generation model, where the sound effect control information generation model is pre-trained based on audio training signals played by the terminal, environmental audio training signals and video training signals played by the terminal;
determining the target sound effect control information based on the output of the sound effect control information generation model.
In one implementation, the sound effect control information generation model of the determining unit is pre-trained in the following manner:
acquiring audio training signals and video training signals, where the audio training signals at least include audio training signals played by the terminal and environmental audio training signals, and the video training signals include video training signals played by the terminal;
training a multi-modal deep learning model based on the audio training signals, the video training signals and preset audio control information until convergence;
using the converged multi-modal deep learning model as the sound effect control information generation model.
In one implementation, the determining unit trains the multi-modal deep learning model based on the audio training signal, the video training signal and the preset audio control information in the following manner:
performing noise reduction on the audio training signal, and dividing the denoised audio training signal into audio frames of equal duration according to a preset frame length;
preprocessing the acquired video signal, where the preprocessing is performing nearest-neighbor upsampling on the video training signal to obtain sampled video frames aligned with the audio frames;
training the multi-modal deep learning model based on the audio frames and the sampled video frames.
In one implementation, the determining unit trains the multi-modal deep learning model based on the audio frames and the sampled video frames in the following manner:
extracting logarithmic mel spectrum audio signal features of the audio frames, and extracting high-dimensional video signal features of the sampled video frames;
using a multi-layer convolutional neural network to perform high-dimensional mapping on the logarithmic mel spectrum audio signal features and the high-dimensional video signal features respectively, and performing feature fusion on the mapped audio signal features and video signal features to obtain fused features;
training the multi-modal deep learning model based on the fused features.
According to a third aspect of the embodiments of the present disclosure, a sound effect control apparatus is provided, including:
a processor;
a memory for storing instructions executable by the processor;
where the processor is configured to execute the sound effect control method described in the first aspect or any implementation of the first aspect.
According to a fourth aspect of the embodiments of the present disclosure, a non-transitory computer-readable storage medium is provided; when instructions in the storage medium are executed by a processor of a mobile terminal, the mobile terminal is enabled to perform the sound effect control method described in the first aspect or any implementation of the first aspect.
The technical solution provided by the embodiments of the present disclosure may include the following beneficial effects: a first audio signal, a second audio signal and a video signal are acquired, where the first audio signal is the audio signal in a video to be played in the terminal, the second audio signal at least includes the first audio signal and an environmental audio signal, and the video signal is the video signal in the video to be played; target sound effect control information is determined based on the second audio signal and the video signal, and the terminal is controlled to play the sound effect of the first audio signal according to the target sound effect control information. The sound effect control method provided by the embodiments of the present disclosure can dynamically and intelligently adjust audio parameters such as playback volume and pitch, improve the environmental adaptability of smart devices in sound effect control, and enable users to obtain the best audio-visual experience.
It should be understood that the above general description and the following detailed description are exemplary and explanatory only, and do not limit the present disclosure.
Brief Description of the Drawings
The accompanying drawings herein are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present disclosure, and together with the specification serve to explain the principles of the present disclosure.
FIG. 1 is a flow chart of a sound effect control method according to an exemplary embodiment.
FIG. 2 is a flow chart of a method for determining sound effect control information according to an exemplary embodiment.
FIG. 3 is a flow chart of a method for pre-training a sound effect control information generation model according to an exemplary embodiment.
FIG. 4 is a flow chart of a method for training a multi-modal deep learning model according to an exemplary embodiment.
FIG. 5 is a flow chart of a method for training a multi-modal deep learning model according to an exemplary embodiment.
FIG. 6 is a flow chart of a method for extracting logarithmic mel spectrum signal features of an audio frame according to an exemplary embodiment of the present disclosure.
FIG. 7 is a block diagram of an audio control apparatus according to an exemplary embodiment.
FIG. 8 is a block diagram of an apparatus for sound effect control according to an exemplary embodiment.
Detailed Description
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure.
The sound effect control method provided by the embodiments of the present disclosure can be applied to smart devices such as mobile phones and tablets, and dynamically and intelligently adjusts the sound effect according to the audio playback content and the environment in which the device is located, improving the environmental adaptability of smart devices in sound effect control and giving users a better audio-visual experience.
In the related art, sound effects are controlled by subjective human selection, which can control the effects of the audio signal in terms of echo, reverberation, equalization and so on: the user manually sets the parameters of the speaker's echo processing module, reverberation processing module, equalization processing module, etc., or manually selects a pre-adjusted sound effect preset, and the sound effect controller adjusts the audio file settings and microphone settings according to these parameters, so that the audio played back has been processed with the selected sound effects.
In practical applications there is still room for improvement in sound effect control methods; for example, the sound effects could be adjusted intelligently according to the environment, or according to both the device's playback environment and the audio and video content.
In view of this, the embodiments of the present disclosure provide a sound effect control method in which the device acquires a first audio signal, a second audio signal and a video signal, where the first audio signal is the audio signal in the video to be played by the terminal, the second audio signal at least includes the first audio signal and the environmental audio signal, and the video signal is the video signal in the video to be played; features are extracted from the audio and video data and passed to a sound effect control information generation model; the target sound effect control information is determined from the output of the model, and the audio signal is played according to the target sound effect control information. In this way the sound effect of the audio to be played is adjusted intelligently according to the environment of the device and the content of the video to be played; the operation is simple and adapts to the device's environment in real time, giving users a better audio-visual experience.
FIG. 1 is a flow chart of a sound effect control method according to an exemplary embodiment. As shown in FIG. 1, the sound effect control method is applied to a terminal and includes the following steps.
In step S11, a first audio signal, a second audio signal and a video signal are acquired, where the first audio signal is the audio signal in a video to be played in the terminal, the second audio signal at least includes the first audio signal and an environmental audio signal, and the video signal is the video signal in the video to be played.
In step S12, target sound effect control information is determined based on the second audio signal and the video signal.
In step S13, the terminal is controlled to play the sound effect of the first audio signal according to the target sound effect control information.
In the embodiments of the present disclosure, three signals need to be obtained: a first audio signal, a second audio signal and a video signal. The audio signal in the video to be played in the terminal is the first audio signal, and the second audio signal at least includes the audio signal in the video to be played in the terminal and the environmental audio signal; that is, the second audio signal at least includes the first audio signal and the environmental sound signal. The first audio signal and the second audio signal may be obtained, for example, by turning on the device microphone. The video signal is the video signal in the video to be played and may be obtained, for example, by the terminal capturing the currently played video.
In the embodiments of the present disclosure, the target sound effect control information is determined from the second audio signal and the video signal, and the terminal plays the sound effect of the first audio signal according to the target sound effect control information; that is, the terminal controls the sound effect of the first audio signal according to the target sound effect control information. The target sound effect control information adjusts the coefficients of the speaker's echo processing, reverberation processing, equalization processing and other processors, controlling the effects of the audio signal in terms of echo, reverberation, equalization and so on, and adjusts the playback order, time, rate and intensity of each speaker so that the audio produces surround sound, stereo and other effects during playback.
The sound effect control method provided in the embodiments of the present disclosure can obtain the environmental audio of the device and include the environmental audio among the factors of the sound effect control information, so that the sound effect of the audio to be played can be adjusted more intelligently.
Further, in the embodiments of the present disclosure, the sound effect control information needs to be determined.
FIG. 2 is a flow chart of a method for determining sound effect control information according to an exemplary embodiment. As shown in FIG. 2, determining the sound effect control information based on the second audio signal and the video signal includes the following steps.
In step S21, the second audio signal and the video signal are input into a sound effect control information generation model, where the sound effect control information generation model is pre-trained based on audio training signals played by the terminal, environmental audio training signals and video training signals played by the terminal.
In step S22, the target sound effect control information is determined based on the output of the sound effect control information generation model.
In the embodiments of the present disclosure, the target sound effect control information is obtained by inputting the second audio signal and the video signal into the sound effect control information generation model, and the output of the model is the target sound effect control information. The sound effect control information generation model is pre-trained based on audio training signals played by the terminal, environmental audio training signals and video training signals played by the terminal.
The environmental audio training signals may include many types, for example, environmental training signals with noisy voices, environmental training signals with busy traffic, environmental training signals from construction sites, environmental training signals in elevators, quiet environmental training signals, and so on.
For example, in an environment with noisy human voices, the sound effect control information generation model outputs, according to the second audio signal and the video signal, target sound effect control information adapted to the noisy environment, thereby obtaining the target sound effect control information.
In the embodiments of the present disclosure, the target sound effect control information can be adjusted dynamically and intelligently, making the device more comfortable to use.
Further, in the embodiments of the present disclosure, the sound effect control information generation model needs to be pre-trained.
FIG. 3 is a flow chart of a method for pre-training a sound effect control information generation model according to an exemplary embodiment. As shown in FIG. 3, the pre-training of the sound effect control information generation model includes the following steps.
In step S31, audio training signals and video training signals are acquired, where the audio training signals at least include audio training signals played by the terminal and environmental audio training signals, and the video training signals include video training signals played by the terminal.
In step S32, the multi-modal deep learning model is trained based on the audio training signals, the video training signals and preset audio control information until convergence.
In step S33, the converged multi-modal deep learning model is used as the sound effect control information generation model.
In the embodiments of the present disclosure, the sound effect control information generation model is pre-trained. Pre-training requires acquiring audio training signals and video training signals, where the audio training signals at least include audio training signals played by the terminal and environmental audio training signals, and the video training signals include video training signals played by the terminal. The multi-modal deep learning model is trained based on the audio training signals, the video training signals and the preset audio control information until convergence, and the converged multi-modal deep learning model is used as the sound effect control information generation model.
The sound effect control method provided in the embodiments of the present disclosure can realize real-time control and processing of sound effects, giving the user a good experience.
Further, in the embodiments of the present disclosure, the multi-modal deep learning model needs to be trained.
FIG. 4 is a flow chart of a method for training a multi-modal deep learning model according to an exemplary embodiment. As shown in FIG. 4, training the multi-modal deep learning model based on the audio training signals, the video training signals and the preset audio control information includes the following steps.
In step S41, noise reduction is performed on the audio training signal, and the denoised audio training signal is divided into audio frames of equal duration according to a preset frame length.
In the embodiments of the present disclosure, noise reduction is performed on the audio training signal, where the noise reduction includes inputting the audio training signal into an adaptive filter; the adaptive filter can be designed with an FIR filter and a time-domain adaptive filtering method. The denoised audio training signal is evenly divided into multiple audio frames of equal duration, for example 3-second frames: if the frame duration is longer than 3 seconds the user's listening experience is better, and if it is shorter than 3 seconds the recognition rate of the audio training signal is higher.
In step S42, the acquired video signal is preprocessed, where the preprocessing is performing nearest-neighbor upsampling on the video training signal to obtain sampled video frames aligned with the audio frames.
In the embodiments of the present disclosure, the video training signal may be obtained, for example, by transmission through the terminal or by recording with a camera installed on the terminal. The acquired video signal is preprocessed by performing nearest-neighbor upsampling on it to obtain sampled video frames aligned with the audio frames, where nearest-neighbor upsampling copies the image signal at adjacent moments in the video training signal until the number of video training frames equals the number of audio training frames.
In step S43, the multi-modal deep learning model is trained based on the audio frames and the sampled video frames.
In the embodiments of the present disclosure, the multi-modal deep learning model is trained based on the audio frames and the sampled video frames.
The multi-modal deep learning model provided in the embodiments of the present disclosure can dynamically handle the sound effect adjustment of audio playback in various scenarios.
Further, in the embodiments of the present disclosure, the multi-modal deep learning model needs to be trained further.
FIG. 5 is a flow chart of a method for training a multi-modal deep learning model according to an exemplary embodiment. As shown in FIG. 5, training the multi-modal deep learning model based on the audio frames and the sampled video frames includes the following steps.
In step S51, logarithmic mel spectrum audio signal features of the audio frames are extracted, and high-dimensional video signal features of the sampled video frames are extracted.
In the embodiments of the present disclosure, the logarithmic mel spectrum audio signal features of the audio frames need to be extracted. FIG. 6 is a flow chart of a method for extracting the logarithmic mel spectrum signal features of an audio frame according to an exemplary embodiment of the present disclosure. Referring to FIG. 6, the preprocessed audio training signal is windowed, i.e. the audio training signal S_pre is multiplied by a window function f_win to give S_win = S_pre * f_win; a fast Fourier transform is applied to the windowed signal to obtain the frequency-domain audio signal S_fre, and the amplitude spectrum of S_fre is further computed as S_pow = abs(S_fre). A bank of k mel filters h_mel is designed, where the frequency-domain formula of the m-th filter H_m is given by an equation shown as an image in the original document (PCTCN2022096053-appb-000001).
In the above bank of k mel filters, the minimum value of k is 0, the maximum value does not exceed the number of sampling points of the audio training signal, and the maximum value of k is related to the terminal on which the method runs.
Continuing the above example, the amplitude spectrum S_pow is convolved with the mel filters and the logarithm of the result is computed to obtain the logarithmic mel spectrum features; the calculation formula is shown as an image in the original document (PCTCN2022096053-appb-000002), in which the symbol shown in image PCTCN2022096053-appb-000003 is the convolution operator.
In the embodiments of the present disclosure, the high-dimensional video signal features of the sampled video frames are extracted; specifically, a deep learning network is used to extract the sampled video frames into high-dimensional video signal features.
In step S52, a multi-layer convolutional neural network is used to perform high-dimensional mapping on the logarithmic mel spectrum audio signal features and the high-dimensional video signal features respectively, and feature fusion is performed on the mapped audio signal features and video signal features to obtain fused features.
In the embodiments of the present disclosure, a multi-layer convolutional neural network is used to perform high-dimensional mapping on the logarithmic mel spectrum audio signal features and the high-dimensional video signal features, mapping them to higher-dimensional features, and feature fusion is performed on the mapped audio signal features and video signal features, where the feature fusion may be performed through a BLSTM (Bi-directional Long Short Term Memory) network to obtain the fused features.
In step S53, the multi-modal deep learning model is trained based on the fused features.
In the embodiments of the present disclosure, the multi-modal deep learning model is trained based on the fused features, which include the mapped audio signal features and video signal features.
In the embodiments of the present disclosure, the further training of the multi-modal deep learning model can better adjust the generation of audio control information according to the video playback content, so that the audio controlled by the sound effect control method better matches the video playback content.
It should be noted that, as those skilled in the art will understand, the various implementations/embodiments described above in the embodiments of the present disclosure can be used in combination with the foregoing embodiments or independently; whether used alone or together with the foregoing embodiments, the implementation principles are similar. In the implementation of the present disclosure, some embodiments are described as being used together. Of course, those skilled in the art will understand that such illustration does not limit the embodiments of the present disclosure.
Based on the same concept, an embodiment of the present disclosure further provides an audio control apparatus.
It can be understood that, in order to realize the above functions, the audio control apparatus provided by the embodiments of the present disclosure includes corresponding hardware structures and/or software modules for performing each function. In combination with the units and algorithm steps of the examples disclosed in the embodiments of the present disclosure, the embodiments of the present disclosure can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each specific application, but such implementations should not be considered to go beyond the scope of the technical solutions of the embodiments of the present disclosure.
FIG. 7 is a block diagram of an audio control apparatus according to an exemplary embodiment. Referring to FIG. 7, the audio control apparatus 100 includes an acquisition unit 101, a determining unit 102 and a playback unit 103.
The acquisition unit 101 acquires a first audio signal, a second audio signal and a video signal, where the first audio signal is the audio signal in a video to be played in the terminal, the second audio signal at least includes the first audio signal and an environmental audio signal, and the video signal is the video signal in the video to be played;
the determining unit 102 determines target sound effect control information based on the second audio signal and the video signal;
the playback unit 103 controls the terminal to play the sound effect of the first audio signal according to the target sound effect control information.
In one implementation, the determining unit 102 determines the sound effect control information based on the second audio signal and the video signal in the following manner: the second audio signal and the video signal are input into a sound effect control information generation model, which is pre-trained based on audio training signals played by the terminal, environmental audio training signals and video training signals played by the terminal; the target sound effect control information is determined based on the output of the sound effect control information generation model.
In one implementation, the sound effect control information generation model of the determining unit 102 is pre-trained in the following manner:
audio training signals and video training signals are acquired, where the audio training signals at least include audio training signals played by the terminal and environmental audio training signals, and the video training signals include video training signals played by the terminal; the multi-modal deep learning model is trained based on the audio training signals, the video training signals and preset audio control information until convergence; the converged multi-modal deep learning model is used as the sound effect control information generation model.
In one implementation, the determining unit 102 trains the multi-modal deep learning model based on the audio training signal, the video training signal and the preset audio control information in the following manner: noise reduction is performed on the audio training signal, and the denoised audio training signal is divided into audio frames of equal duration according to a preset frame length; the acquired video signal is preprocessed, where the preprocessing is nearest-neighbor upsampling of the video training signal to obtain sampled video frames aligned with the audio frames; the multi-modal deep learning model is trained based on the audio frames and the sampled video frames.
In one implementation, the determining unit 102 trains the multi-modal deep learning model based on the audio frames and the sampled video frames in the following manner: the logarithmic mel spectrum audio signal features of the audio frames are extracted, and the high-dimensional video signal features of the sampled video frames are extracted; a multi-layer convolutional neural network is used to perform high-dimensional mapping on the logarithmic mel spectrum audio signal features and the high-dimensional video signal features respectively, and feature fusion is performed on the mapped audio signal features and video signal features to obtain fused features; the multi-modal deep learning model is trained based on the fused features.
Regarding the apparatus in the above embodiments, the specific manner in which each module performs operations has been described in detail in the embodiments of the method, and will not be elaborated here.
FIG. 8 is a block diagram of an apparatus 200 for sound effect control according to an exemplary embodiment. For example, the apparatus 200 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Referring to FIG. 8, the apparatus 200 may include one or more of the following components: a processing component 202, a memory 204, a power component 206, a multimedia component 208, an audio component 210, an input/output (I/O) interface 212, a sensor component 214, and a communication component 216.
The processing component 202 generally controls the overall operations of the apparatus 200, such as operations associated with display, phone calls, data communication, camera operations and recording operations. The processing component 202 may include one or more processors 220 to execute instructions to complete all or part of the steps of the above method. In addition, the processing component 202 may include one or more modules to facilitate interaction between the processing component 202 and other components. For example, the processing component 202 may include a multimedia module to facilitate interaction between the multimedia component 208 and the processing component 202.
The memory 204 is configured to store various types of data to support operation at the apparatus 200. Examples of such data include instructions for any application or method operating on the apparatus 200, contact data, phonebook data, messages, pictures, videos, and so on. The memory 204 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
The power component 206 provides power to the various components of the apparatus 200. The power component 206 may include a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the apparatus 200.
The multimedia component 208 includes a screen that provides an output interface between the apparatus 200 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 208 includes a front camera and/or a rear camera. When the apparatus 200 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 210 is configured to output and/or input audio signals. For example, the audio component 210 includes a microphone (MIC) configured to receive external audio signals when the apparatus 200 is in an operating mode, such as a call mode, a recording mode or a voice recognition mode. The received audio signals may be further stored in the memory 204 or transmitted via the communication component 216. In some embodiments, the audio component 210 also includes a speaker for outputting audio signals.
The I/O interface 212 provides an interface between the processing component 202 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to, a home button, volume buttons, a start button and a lock button.
The sensor component 214 includes one or more sensors for providing various aspects of state assessment for the apparatus 200. For example, the sensor component 214 can detect the open/closed state of the apparatus 200 and the relative positioning of components, such as the display and keypad of the apparatus 200; the sensor component 214 can also detect a change in position of the apparatus 200 or a component of the apparatus 200, the presence or absence of user contact with the apparatus 200, the orientation or acceleration/deceleration of the apparatus 200, and temperature changes of the apparatus 200. The sensor component 214 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 214 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 214 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 216 is configured to facilitate wired or wireless communication between the apparatus 200 and other devices. The apparatus 200 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 216 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 216 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In an exemplary embodiment, the apparatus 200 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components, for performing the above method.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions, such as the memory 204 including instructions executable by the processor 220 of the apparatus 200 to complete the above method, is also provided. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It can be further understood that in the present disclosure, "a plurality of" means two or more, and other quantifiers are similar. "And/or" describes the relationship between associated objects and indicates that three relationships can exist; for example, A and/or B can mean that A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. The singular forms "a", "an" and "the" are also intended to include the plural forms, unless the context clearly indicates otherwise.
It can be further understood that the terms "first", "second", etc. are used to describe various kinds of information, but the information should not be limited to these terms. These terms are only used to distinguish information of the same type from one another and do not imply a specific order or degree of importance; in fact, expressions such as "first" and "second" can be used interchangeably. For example, without departing from the scope of the present disclosure, the first information may also be called the second information, and similarly, the second information may also be called the first information.
It can be further understood that although operations are described in a specific order in the drawings in the embodiments of the present disclosure, this should not be understood as requiring that the operations be performed in the specific order shown or in serial order, or that all of the operations shown be performed to obtain the desired result. In certain circumstances, multitasking and parallel processing may be advantageous.
Those skilled in the art will easily conceive of other embodiments of the present disclosure after considering the specification and practicing the invention disclosed herein. This application is intended to cover any variations, uses or adaptations of the present disclosure that follow the general principles of the present disclosure and include common knowledge or customary technical means in the art not disclosed in the present disclosure.
It should be understood that the present disclosure is not limited to the precise structure described above and shown in the accompanying drawings, and various modifications and changes can be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

  1. A sound effect control method, applied to a terminal, comprising:
    acquiring a first audio signal, a second audio signal and a video signal, wherein the first audio signal is the audio signal in a video to be played in the terminal, the second audio signal at least comprises the first audio signal and an environmental audio signal, and the video signal is the video signal in the video to be played;
    determining target sound effect control information based on the second audio signal and the video signal;
    controlling the terminal to play the sound effect of the first audio signal according to the target sound effect control information.
  2. The method according to claim 1, wherein determining the sound effect control information based on the second audio signal and the video signal comprises:
    inputting the second audio signal and the video signal into a sound effect control information generation model, wherein the sound effect control information generation model is pre-trained based on audio training signals played by the terminal, environmental audio training signals and video training signals played by the terminal;
    determining the target sound effect control information based on the output of the sound effect control information generation model.
  3. The method according to claim 2, wherein the sound effect control information generation model is pre-trained in the following manner:
    acquiring audio training signals and video training signals, wherein the audio training signals at least comprise audio training signals played by the terminal and environmental audio training signals, and the video training signals comprise video training signals played by the terminal;
    training a multi-modal deep learning model based on the audio training signals, the video training signals and preset audio control information until convergence;
    using the converged multi-modal deep learning model as the sound effect control information generation model.
  4. The method according to claim 3, wherein training the multi-modal deep learning model based on the audio training signal, the video training signal and the preset audio control information comprises:
    performing noise reduction on the audio training signal, and dividing the denoised audio training signal into audio frames of equal duration according to a preset frame length;
    preprocessing the acquired video signal, wherein the preprocessing is performing nearest-neighbor upsampling on the video training signal to obtain sampled video frames aligned with the audio frames;
    training the multi-modal deep learning model based on the audio frames and the sampled video frames.
  5. The method according to claim 4, wherein training the multi-modal deep learning model based on the audio frames and the sampled video frames comprises:
    extracting logarithmic mel spectrum audio signal features of the audio frames, and extracting high-dimensional video signal features of the sampled video frames;
    using a multi-layer convolutional neural network to perform high-dimensional mapping on the logarithmic mel spectrum audio signal features and the high-dimensional video signal features respectively, and performing feature fusion on the mapped audio signal features and video signal features to obtain fused features;
    training the multi-modal deep learning model based on the fused features.
  6. A sound effect control apparatus, applied to a terminal, comprising:
    an acquisition unit that acquires a first audio signal, a second audio signal and a video signal, wherein the first audio signal is the audio signal in a video to be played in the terminal, the second audio signal at least comprises the first audio signal and an environmental audio signal, and the video signal is the video signal in the video to be played;
    a determining unit that determines target sound effect control information based on the second audio signal and the video signal;
    a playback unit that controls the terminal to play the sound effect of the first audio signal according to the target sound effect control information.
  7. The apparatus according to claim 6, wherein the determining unit determines the sound effect control information based on the second audio signal and the video signal in the following manner:
    inputting the second audio signal and the video signal into a sound effect control information generation model, wherein the sound effect control information generation model is pre-trained based on audio training signals played by the terminal, environmental audio training signals and video training signals played by the terminal;
    determining the target sound effect control information based on the output of the sound effect control information generation model.
  8. The apparatus according to claim 7, wherein the sound effect control information generation model of the determining unit is pre-trained in the following manner:
    acquiring audio training signals and video training signals, wherein the audio training signals at least comprise audio training signals played by the terminal and environmental audio training signals, and the video training signals comprise video training signals played by the terminal;
    training a multi-modal deep learning model based on the audio training signals, the video training signals and preset audio control information until convergence;
    using the converged multi-modal deep learning model as the sound effect control information generation model.
  9. The apparatus according to claim 8, wherein the determining unit trains the multi-modal deep learning model based on the audio training signal, the video training signal and the preset audio control information in the following manner:
    performing noise reduction on the audio training signal, and dividing the denoised audio training signal into audio frames of equal duration according to a preset frame length;
    preprocessing the acquired video signal, wherein the preprocessing is performing nearest-neighbor upsampling on the video training signal to obtain sampled video frames aligned with the audio frames;
    training the multi-modal deep learning model based on the audio frames and the sampled video frames.
  10. The apparatus according to claim 9, wherein the determining unit trains the multi-modal deep learning model based on the audio frames and the sampled video frames in the following manner:
    extracting logarithmic mel spectrum audio signal features of the audio frames, and extracting high-dimensional video signal features of the sampled video frames;
    using a multi-layer convolutional neural network to perform high-dimensional mapping on the logarithmic mel spectrum audio signal features and the high-dimensional video signal features respectively, and performing feature fusion on the mapped audio signal features and video signal features to obtain fused features;
    training the multi-modal deep learning model based on the fused features.
  11. A sound effect control apparatus, comprising:
    a processor;
    a memory for storing instructions executable by the processor;
    wherein the processor is configured to execute the sound effect control method according to any one of claims 1 to 5.
  12. A non-transitory computer-readable storage medium, wherein when instructions in the storage medium are executed by a processor of a mobile terminal, the mobile terminal is enabled to perform the sound effect control method according to any one of claims 1 to 5.
PCT/CN2022/096053 2022-05-30 2022-05-30 一种音效控制方法、装置及存储介质 WO2023230782A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2022/096053 WO2023230782A1 (zh) 2022-05-30 2022-05-30 一种音效控制方法、装置及存储介质
CN202280004323.0A CN117501363A (zh) 2022-05-30 2022-05-30 一种音效控制方法、装置及存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/096053 WO2023230782A1 (zh) 2022-05-30 2022-05-30 一种音效控制方法、装置及存储介质

Publications (1)

Publication Number Publication Date
WO2023230782A1 true WO2023230782A1 (zh) 2023-12-07

Family

ID=89026613

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/096053 WO2023230782A1 (zh) 2022-05-30 2022-05-30 一种音效控制方法、装置及存储介质

Country Status (2)

Country Link
CN (1) CN117501363A (zh)
WO (1) WO2023230782A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109286772A (zh) * 2018-09-04 2019-01-29 Oppo广东移动通信有限公司 音效调整方法、装置、电子设备以及存储介质
CN111246283A (zh) * 2020-01-17 2020-06-05 北京达佳互联信息技术有限公司 视频播放方法、装置、电子设备及存储介质
US20200288255A1 (en) * 2019-03-08 2020-09-10 Lg Electronics Inc. Method and apparatus for sound object following
CN113129917A (zh) * 2020-01-15 2021-07-16 荣耀终端有限公司 基于场景识别的语音处理方法及其装置、介质和系统
US20210319321A1 (en) * 2020-04-14 2021-10-14 Sony Interactive Entertainment Inc. Self-supervised ai-assisted sound effect recommendation for silent video
CN113793623A (zh) * 2021-08-17 2021-12-14 咪咕音乐有限公司 音效设置方法、装置、设备以及计算机可读存储介质


Also Published As

Publication number Publication date
CN117501363A (zh) 2024-02-02

Similar Documents

Publication Publication Date Title
KR102312124B1 (ko) 향상된 오디오를 갖는 디바이스
WO2020168873A1 (zh) 语音处理方法、装置、电子设备及存储介质
EP3163748B1 (en) Method, device and terminal for adjusting volume
CN104991754B (zh) 录音方法及装置
WO2016176951A1 (zh) 声音信号优化方法及装置
CN109410973B (zh) 变声处理方法、装置和计算机可读存储介质
CN107871494B (zh) 一种语音合成的方法、装置及电子设备
CN115482830B (zh) 语音增强方法及相关设备
CN110853664A (zh) 评估语音增强算法性能的方法及装置、电子设备
US20240096343A1 (en) Voice quality enhancement method and related device
CN108845787A (zh) 音频调节的方法、装置、终端及存储介质
CN106782625B (zh) 音频处理方法和装置
EP4050601B1 (en) Method and apparatus for audio processing, terminal and storage medium
WO2023231686A9 (zh) 一种视频处理方法和终端
WO2023230782A1 (zh) 一种音效控制方法、装置及存储介质
CN112201267A (zh) 一种音频处理方法、装置、电子设备及存储介质
CN111988704A (zh) 声音信号处理方法、装置以及存储介质
US11682412B2 (en) Information processing method, electronic equipment, and storage medium
CN113810828A (zh) 音频信号处理方法、装置、可读存储介质及耳机
CN111667842B (zh) 音频信号处理方法及装置
CN114095817A (zh) 耳机的降噪方法、装置、耳机及存储介质
CN111736798A (zh) 音量调节方法、音量调节装置及计算机可读存储介质
TWI687917B (zh) 語音系統及聲音偵測方法
WO2023240887A1 (zh) 去混响方法、装置、设备及存储介质
CN113825082B (zh) 一种用于缓解助听延迟的方法及装置

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 202280004323.0

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22944124

Country of ref document: EP

Kind code of ref document: A1