CN112423009A - Method and equipment for controlling live broadcast audio - Google Patents

Method and equipment for controlling live broadcast audio

Info

Publication number
CN112423009A
Authority
CN
China
Prior art keywords
audio signal
sound
live broadcast
mixing
audio
Prior art date
Legal status
Pending
Application number
CN202011241301.5A
Other languages
Chinese (zh)
Inventor
邓琼
Current Assignee
Gree Electric Appliances Inc of Zhuhai
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Priority date
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai
Priority to CN202011241301.5A
Publication of CN112423009A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/233 Processing of audio elementary streams
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42203 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439 Processing of audio elementary streams
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/8106 Monomedia components thereof involving special audio data, e.g. different tracks for different languages
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/8166 Monomedia components thereof involving executable data, e.g. software

Abstract

The application provides a method and equipment for controlling live broadcast audio, which are used to improve the richness of audio in the live broadcast process. In the method, in response to a live broadcast sound mixing instruction, a first audio signal collected by a microphone and a second audio signal output by multimedia software are mixed, and the mixed audio signal is sent to the currently running live broadcast software. The audio signal in the live broadcast process therefore contains both the first audio signal collected by the microphone and the second audio signal output by the multimedia software, so the accompaniment for the live broadcast can be obtained through the multimedia software, which enriches the choice of accompaniment and improves the richness of the audio during live broadcast.

Description

Method and equipment for controlling live broadcast audio
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for controlling live audio.
Background
With the development of internet technology, video and audio have entered users' lives, for example through the currently popular live broadcast industry. Various forms of live broadcast have enriched users' lives, such as entertainment live broadcasts (music live broadcasts, fun live broadcasts, game live broadcasts and the like) and shopping live broadcasts (e-commerce live broadcasts and the like).
At present, in order to keep a live broadcast from being monotonous, an accompaniment is usually added during the live broadcast. However, the accompaniment can only be music belonging to the live broadcast software itself; audio signals from other software, such as other music players, video players or reading players, cannot be used as the accompaniment in the live broadcast software. Therefore, the audio in the current live broadcast process is not rich enough.
Disclosure of Invention
The embodiment of the application provides a method for controlling live broadcast audio, which is used for improving the richness of the audio in the live broadcast process.
In a first aspect, an embodiment of the present application provides a method for controlling live audio, where the method includes:
responding to a live sound mixing instruction, and mixing a first audio signal collected by a microphone and a second audio signal output by multimedia software;
and sending the audio signal after mixing processing to the currently running live broadcast software.
In this application, when a live broadcast sound mixing instruction is received, the first audio signal collected by the microphone and the second audio signal output by the multimedia software are mixed, and the mixed audio signal is sent to the currently running live broadcast software. The audio signal in the live broadcast process thus includes both the first audio signal collected by the microphone and the second audio signal output by the multimedia software, so the accompaniment for the live broadcast can be obtained through the multimedia software, which enriches the choice of accompaniment during live broadcast.
In one possible implementation manner, when the first audio signal collected by the microphone and the second audio signal output by the multimedia software are mixed and processed in response to the live sound mixing instruction:
if the live broadcast sound mixing instruction comprises a sound change instruction, performing sound change processing on the first audio signal and then mixing the first audio signal after sound change processing with the second audio signal; or
And if the live broadcast sound mixing instruction does not comprise the sound change instruction, directly mixing the first audio signal and the second audio signal.
In the application, in order to improve the sound effect in the live broadcast process, the live broadcast sound mixing instruction may include a sound change instruction. When a sound change instruction is included, sound change processing is performed on the first audio signal acquired from the microphone, which enriches the sound effect, and the first audio signal after sound change processing is mixed with the second audio signal; when the live broadcast sound mixing instruction does not include a sound change instruction, the first audio signal and the second audio signal are directly mixed.
In one possible implementation manner, in response to a live sound mixing instruction, mixing a first audio signal collected by a microphone and a second audio signal output by multimedia software, including:
if the first audio signal is not acquired and a second audio signal exists, directly transmitting the second audio signal to currently running live broadcast software; or
If the first audio signal is acquired, the second audio signal does not exist, and the live broadcast sound mixing instruction does not include the sound change instruction, directly sending the first audio signal to currently-operated live broadcast software; or
And if the first audio signal is acquired, the live broadcast sound mixing instruction comprises a sound change instruction, and no second audio signal exists, the first audio signal is subjected to sound change processing and then directly sent to currently running live broadcast software.
In the present application, an implementation is further given of how to perform the processing when only one of the two audio signals is present while the first audio signal and the second audio signal are being mixed.
In one possible implementation, the live sound mixing instruction includes a sound change instruction, and the sound change processing of the first audio signal includes:
determining a target sound type indicated by the sound change instruction according to the sound change instruction;
determining an audio parameter corresponding to the target sound type according to the corresponding relation between the preset sound type and the audio parameter;
and modifying the audio parameters in the first audio signal into audio parameters corresponding to the target sound type.
In the application, an implementation manner of the sound change processing is provided, so that the sound change processing is accurately performed on the first audio signal collected from the microphone according to the received sound change instruction.
In one possible implementation manner, mixing the first audio signal collected by the microphone and the second audio signal output by the multimedia software includes:
converting the first audio signal into a first digital audio signal and converting the second audio signal into a second digital audio signal;
and adding the first data audio signal and the second digital audio signal in a time domain to obtain a mixed digital audio signal.
In the present application, an implementation of the mixing process is presented.
In a second aspect, an embodiment of the present application provides an apparatus for controlling live audio, where the apparatus includes: a CPU (Central Processing Unit), a DSP (Digital Signal Processor), and a microphone, wherein:
the CPU is used for responding to the live broadcast sound mixing instruction and forwarding the live broadcast sound mixing instruction to the DSP;
and the DSP is used for responding to the live broadcast sound mixing instruction, mixing the first audio signal collected by the microphone and the second audio signal output by the multimedia software, returning the mixed audio signal to the CPU, and sending the mixed audio signal to the currently running live broadcast software through the CPU.
In one possible implementation, the DSP is specifically configured to:
if the live broadcast sound mixing instruction comprises a sound change instruction, performing sound change processing on the first audio signal and then mixing the first audio signal after sound change processing with the second audio signal; or
And if the live broadcast sound mixing instruction does not comprise the sound change instruction, directly mixing the first audio signal and the second audio signal.
In one possible implementation, the DSP is specifically configured to:
if the first audio signal is not acquired and a second audio signal exists, directly transmitting the second audio signal to currently running live broadcast software; or
If the first audio signal is acquired, the live broadcast sound mixing instruction does not comprise a sound change instruction, and a second audio signal does not exist, the first audio signal is directly sent to currently running live broadcast software; or
And if the first audio signal is acquired, the live broadcast sound mixing instruction comprises a sound change instruction, and no second audio signal exists, the first audio signal is subjected to sound change processing and then sent to currently running live broadcast software.
In one possible implementation, the DSP is further configured to:
determining a target sound type indicated by the sound change instruction according to the sound change instruction;
determining an audio parameter corresponding to the target sound type according to the corresponding relation between the preset sound type and the audio parameter;
and modifying the audio parameters in the first audio signal into audio parameters corresponding to the target sound type.
In one possible implementation, the DSP is specifically configured to:
converting the first audio signal into a first digital audio signal and converting the second audio signal into a second digital audio signal;
and adding the first data audio signal and the second digital audio signal in a time domain to obtain a mixed digital audio signal.
In a third aspect, an embodiment of the present application provides an apparatus for controlling live audio, where the apparatus includes:
the processing module is used for responding to a live sound mixing instruction and mixing a first audio signal acquired by a microphone and a second audio signal output by the multimedia software;
and the sending module is used for sending the audio signals after the mixing processing to the currently running live broadcast software.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where computer instructions are stored, and when executed by a processor, the computer instructions implement the method for controlling live audio provided in the embodiment of the present application.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic diagram of a software scenario provided in an embodiment of the present application;
fig. 2 is a structural diagram of a terminal device in live broadcast provided in an embodiment of the present application;
fig. 3 is a flowchart of a method for controlling live audio according to an embodiment of the present application;
fig. 4 is a software interface diagram for triggering a live sound mixing instruction according to an embodiment of the present application;
fig. 5 is a schematic frame diagram of controlling live audio according to an embodiment of the present application;
fig. 6 is a schematic circuit diagram of controlling live audio according to an embodiment of the present disclosure;
fig. 7 is a flowchart of an overall method for controlling live audio according to an embodiment of the present disclosure;
fig. 8 is a structural diagram of a device for controlling live audio according to an embodiment of the present application;
fig. 9 is a structural diagram of an apparatus for controlling live audio according to an embodiment of the present application.
Detailed Description
The architecture and the service scenario described in the embodiment of the present application are for more clearly illustrating the technical solution of the embodiment of the present application, and do not form a limitation on the technical solution provided in the embodiment of the present application, and it can be known by a person skilled in the art that with the occurrence of a new service scenario, the technical solution provided in the embodiment of the present application is also applicable to similar technical problems.
In the description of the embodiments of the present application, "/" means "or" unless otherwise specified; for example, A/B may mean A or B. "And/or" in the text merely describes an association relationship between associated objects and means that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, "a plurality" means two or more in the description of the embodiments of the present application.
In the following, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of embodiments of the application, unless stated otherwise, "plurality" means two or more.
For the sake of understanding, the design concept of the present application will be briefly described below.
With the development of the internet, the live broadcast industry has deepened into the life of users, such as live broadcast of fun, live broadcast of music, live broadcast of e-commerce and the like. In the live broadcast, terminal devices such as mobile phones and computers are generally adopted, and with the increasing strength of the functions of the mobile phones, the live broadcast through the mobile phones occupies a large share in the live broadcast industry.
Taking live e-commerce as an example, the anchor starts a live broadcast through live broadcast software on a mobile phone and sells his or her own goods during the live broadcast. Generally, while selling the goods, the anchor needs to continuously introduce their characteristics, performance and the like to arouse the viewers' desire to purchase. To keep the live broadcast from being monotonous, some anchors add a music accompaniment during the live broadcast, but the accompaniment can only use the music carried in the live broadcast software, and audio signals provided by third-party software cannot be used; for example, music played in other music players cannot be used. In addition, the anchor can only use his or her own original voice during the live broadcast, and the live broadcast software cannot perform any further processing on the anchor's voice.
In summary, the audio in the existing live broadcast process includes only the anchor's original voice and/or the music carried by the live broadcast software, so the live broadcast audio effect is not rich enough.
In view of this, embodiments of the present application provide a method and an apparatus for controlling live broadcast audio, so that an audio signal output by any multimedia software can be used as an accompaniment in the live broadcast process, which enriches the selection of accompaniment during live broadcast; in addition, sound change processing can be performed on the anchor's voice, making the anchor's sound effect better and more interesting.
After introducing the design idea of the embodiment of the present application, a brief description of a software scenario set by the present application is provided below. The following scenarios are only used to illustrate the embodiments of the present application and are not limiting. In specific implementation, the technical scheme provided by the embodiment of the application can be flexibly implemented according to actual needs.
Fig. 1 is a schematic diagram of a software scenario provided in the embodiment of the present application. At least one terminal device 10 is included in the scenario. The terminal device 10 is an electronic device used by a user, and the electronic device may be a computer device having a certain computing capability and running instant messaging software and a website or social contact software and a website, such as a personal computer, a mobile phone, a tablet computer, a notebook, an e-book reader, and the like.
A hardware configuration block diagram of the terminal device 10 is exemplarily shown in fig. 2. As shown in fig. 2, the terminal device 10 may include a tuner demodulator 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a memory 260, a user interface 265, a video processor 270, a display 275, an audio processor 280, an audio output interface 285, and a power supply 290.
The tuning demodulator 210 receives signals in a wired or wireless manner, may perform modulation and demodulation processing such as amplification, mixing, resonance, and the like, and is configured to demodulate an audio/video signal carried in live broadcast watched by a user from a plurality of wireless or wired signals.
The tuner demodulator 210 can receive signals according to different broadcasting systems, such as: terrestrial broadcasting, cable broadcasting, satellite broadcasting, internet broadcasting, or the like; and according to different modulation types, a digital modulation mode or an analog modulation mode can be adopted; and the analog signal and the digital signal can be demodulated according to the different types of the received signals.
The communicator 220 is a component for communicating with an external device or an external server according to various communication protocol types. For example, the terminal device 10 transmits data to an external device connected via the communicator 220, or browses and downloads video data from an external device connected via the communicator 220. The communicator 220 may include a network communication protocol module or a near field communication protocol module, such as a WIFI module 221, a bluetooth communication protocol module 222, and a wired ethernet communication protocol module 223, so that the communicator 220 may receive a control signal of the control device 100 according to the control of the controller 250 and implement the control signal as a WIFI signal, a bluetooth signal, a radio frequency signal, and the like.
The detector 230 is a component of the terminal device 10 for collecting an external environment or a signal interacting with the outside. The detector 230 may include a sound collector 231, such as a microphone, which may be used to receive the sound of the user, such as a voice signal of a control instruction of the user to control the terminal device 10; alternatively, an ambient sound for identifying the type of the ambient scene may be collected, enabling the terminal device 10 to adapt to the ambient noise.
In some other exemplary embodiments, the detector 230 may further include an image collector 232, such as a camera, a video camera, etc., which may be used to collect the external environment scene; and for capturing video taken by the user.
In some other exemplary embodiments, the detector 230 may further include a light receiver for collecting the ambient light intensity to adapt to the display parameter variation of the terminal device 10.
In some other exemplary embodiments, the detector 230 may further include a temperature sensor, such as by sensing an ambient temperature, and the terminal device 10 may adaptively adjust a display color temperature of the image. For example, when the temperature is higher, the terminal device 10 may be adjusted to display a color temperature of the image which is cooler; when the temperature is lower, the terminal device 10 can be adjusted to display the image with warmer color temperature.
The external device interface 240 is a component for providing the controller 250 to control data transmission between the terminal apparatus 10 and an external apparatus. The external device interface 240 may be connected to an external apparatus in a wired/wireless manner, and may receive data such as a video signal and an audio signal from the external apparatus.
The external device interface 240 may include: a High Definition Multimedia Interface (HDMI) terminal 241, a Composite Video Blanking Sync (CVBS) terminal 242, an analog or digital Component terminal 243, a Universal Serial Bus (USB) terminal 244, a Component terminal (not shown), a red, green, blue (RGB) terminal (not shown), and the like.
The controller 250 controls the operation of the terminal device 10 and responds to the operation of a user by executing various software control programs (such as an operating system and various software programs) stored in the memory 260.
As shown in fig. 2, the controller 250 includes a Random Access Memory (RAM)251, a Read Only Memory (ROM)252, a graphics processor 253, a CPU processor 254, a communication interface 255, and a communication bus 256. The RAM251, the ROM252, the graphic processor 253, and the CPU processor 254 are connected to each other through a communication bus 256 through a communication interface 255.
The ROM252 stores various system boot instructions. If the terminal device 10 starts the power-on upon receiving the power-on signal, the CPU processor 254 executes the system boot instruction in the ROM252, and copies the operating system stored in the memory 260 to the RAM251 to start the boot of the operating system. After the start of the operating system is completed, the CPU processor 254 copies the various software programs in the memory 260 to the RAM251 and then starts running the various software programs.
And a graphic processor 253 for generating various graphic objects such as icons, operation menus, and user input instruction display graphics, etc. The graphic processor 253 may include an operator for performing an operation by receiving various interactive instructions input by a user, and further displaying various objects according to display attributes; and a renderer for generating various objects based on the operator and displaying the rendered result on the display 275.
A CPU processor 254 for executing the operating system and software program instructions stored in memory 260. And executing processing of various software programs, data and content according to the received user input instructions so as to finally display and play various audio-video contents.
In some example embodiments, the CPU processor 254 may comprise a plurality of processors. The plurality of processors may include one main processor and a plurality of or one sub-processor. A main processor for performing some initialization operations of the terminal device 10 in the display device preload mode and/or operations for displaying images in the normal mode. A plurality of or one sub-processor for performing an operation in a state of a standby mode or the like of the display apparatus.
The communication interface 255 may include a first interface to an nth interface. These interfaces may be network interfaces that are connected to external devices via a network.
The controller 250 may control the overall operation of the terminal device 10. For example: in response to receiving a user input command for selecting a GUI object displayed on the display 275, the controller 250 may perform an operation related to the object selected by the user input command.
Where the object may be any one of the selectable objects, such as a hyperlink or an icon. The operation related to the selected object is, for example, an operation of displaying a link to a hyperlink interface, a document, an image, or the like, or an operation of executing a program corresponding to the object. The user input command for selecting the GUI object may be a command input through various input means (e.g., a mouse, a keyboard, a touch panel, etc.) connected to the terminal device 10 or a voice command corresponding to a voice spoken by the user.
The memory 260 is used for storing various types of data, software programs, or software programs for driving and controlling the operation of the terminal device 10. The memory 260 may include volatile and/or nonvolatile memory. And the term "memory" includes the memory 260, the RAM251 and the ROM252 of the controller 250, or a memory card in the terminal device 10.
In some embodiments, the memory 260 is specifically configured to store an operating program for driving the controller 250 in the terminal device 10; various software programs built in the storage terminal device 10 and downloaded by a user from an external device; data such as visual effect images for configuring various GUIs provided by the display 275, various objects related to the GUIs, and selectors for selecting GUI objects are stored.
In some embodiments, memory 260 is specifically configured to store drivers for tuner demodulator 210, communicator 220, detector 230, external device interface 240, video processor 270, display 275, audio processor 280, etc., and related data, such as external data (e.g., audio-visual data) received from the external device interface or user data (e.g., key information, voice information, touch information, etc.) received by the user interface. In some embodiments, memory 260 specifically stores software and/or programs representing an Operating System (OS), which may include, for example: a kernel, middleware, a software programming interface (API), and/or a software program. Illustratively, the kernel may control or manage system resources, as well as functions implemented by other programs (e.g., middleware, APIs, or software programs); at the same time, the kernel may provide an interface to allow middleware, an API, or a software program to access the controller to implement controlling or managing system resources.
The method for controlling live audio provided by the exemplary embodiment of the present application is described below with reference to fig. 3 in conjunction with the software scenario described above. Fig. 3 exemplarily provides steps of a method for controlling live audio in an embodiment of the present application, which specifically include:
Step 300, responding to a live broadcast sound mixing instruction, and mixing a first audio signal collected by a microphone and a second audio signal output by the multimedia software.
The live broadcast sound mixing instruction is triggered by a user through live broadcast control software in the terminal equipment. Fig. 4 is a display interface diagram of the live control software for triggering the live sound mixing instruction in the present application, which is exemplarily provided. The display interface comprises a plurality of controls, each control corresponds to one instruction, and as shown in fig. 4, controls such as a high-quality male voice, a high-quality female voice, a doll voice, reverberation, a one-key live broadcast and the like are displayed in the display interface.
When "one-click live broadcast" is clicked on the display page of fig. 4, but any one of "good male voice, good female voice, doll voice, and reverberation" is not selected, a prompt window indicating what kind of voice the user voice is played with will be displayed on the display page. Therefore, after the user selects any one of the "high-quality male voice, high-quality female voice, doll voice and reverberation" displayed in the display interface of fig. 4, the live broadcast voice mixing instruction can be started only by triggering the "one-key live broadcast".
The high-quality male voice is used for converting a first audio signal acquired from a microphone into an audio signal corresponding to the high-quality male voice, and then mixing the audio signal with a second audio signal output by the multimedia software;
the 'good female voice' is used for indicating that after the first audio signal acquired from the microphone is changed into the audio signal corresponding to the 'good female voice', the first audio signal is mixed with the second audio signal output by the multimedia software;
the doll sound is used for changing the first audio signal acquired from the microphone into an audio signal corresponding to the doll sound and then mixing the audio signal with a second audio signal output by the multimedia software;
the term "reverberation" is used to mean that a first audio signal acquired from a microphone is directly mixed with a second audio signal output by multimedia software.
If the "one-click live broadcast" button is not clicked, the user's original sound and the music carried by the live broadcast software are used as the accompaniment.
Therefore, in the present application, when a live sound mixing instruction is responded, and a first audio signal collected by a microphone and a second audio signal output by multimedia software are mixed, it should be determined whether the live sound mixing instruction includes a sound change instruction.
Case one: the live broadcast sound mixing instruction includes a sound change instruction.
If a sound change instruction is included, it is further determined whether the first audio signal collected from the microphone should be changed into the audio signal of "high-quality male voice", the audio signal of "high-quality female voice" or the audio signal of "doll voice".
Audio signal of "high-quality male voice":
according to the sound change instruction, the target sound type indicated by the sound change instruction is determined to be "high-quality male voice";
the audio parameter corresponding to "high-quality male voice" is determined according to the preset correspondence between sound types and audio parameters; the audio parameters in the first audio signal are then modified into the audio parameters corresponding to "high-quality male voice", which completes the sound change processing of the first audio signal, and the first audio signal after sound change processing is mixed with the second audio signal output by the multimedia software.
It should be noted that the sound change processing for the audio signal of "high-quality female voice" and the audio signal of "doll voice" is the same as that for the audio signal of "high-quality male voice", and will not be repeated.
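The following is a minimal Python sketch of the sound change processing described above, assuming a preset correspondence table keyed by target sound type. The concrete audio parameters (a pitch ratio and a gain) and the simple resampling-based pitch shift are illustrative assumptions only, since the embodiment does not specify which audio parameters are stored or how they are applied.

```python
import numpy as np

# Assumed preset correspondence between sound type and audio parameters.
SOUND_TYPE_PARAMS = {
    "high-quality male voice":   {"pitch_ratio": 0.8,  "gain": 1.0},
    "high-quality female voice": {"pitch_ratio": 1.25, "gain": 1.0},
    "doll voice":                {"pitch_ratio": 1.6,  "gain": 0.9},
}

def change_voice(first_signal: np.ndarray, target_type: str) -> np.ndarray:
    """Modify the audio parameters of the first audio signal to those of the target sound type."""
    params = SOUND_TYPE_PARAMS[target_type]
    # Crude pitch shift by resampling; a real DSP would use a pitch-shifting
    # algorithm that preserves the duration of the signal.
    n = len(first_signal)
    sample_positions = np.arange(0, n, params["pitch_ratio"])
    shifted = np.interp(sample_positions, np.arange(n), first_signal)
    return params["gain"] * shifted
```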
Case two: the live sound mixing instruction does not include a change sound instruction.
If the audio change instruction is not included, it is indicated that the first audio signal collected from the microphone and the second audio signal output by the multimedia software are directly mixed without performing the audio change processing on the first audio signal collected from the microphone.
In the application, in the process of responding to the live sound mixing instruction and mixing the first audio signal collected by the microphone and the second audio signal output by the multimedia software, if the multimedia software is not started in the terminal equipment or the first audio signal is not collected from the microphone, the mixing processing is not needed. Specifically, there may be the following:
if the first audio signal is not collected from the microphone and a second audio signal output by the multimedia software exists, directly taking the second audio signal as the audio signal after mixing processing; or
If the first audio signal is collected from the microphone, the live broadcast sound mixing instruction does not comprise a sound change instruction, and a second audio signal output by multimedia software does not exist, the first audio signal is directly used as the audio signal after mixing processing; or
If the first audio signal is collected from the microphone, the live broadcast sound mixing instruction comprises a sound change instruction, and a second audio signal output by the multimedia software does not exist, the first audio signal is subjected to sound change processing, and then the first audio signal subjected to sound change processing is directly used as the audio signal subjected to mixing processing.
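The cases above can be expressed, as a rough sketch only, by a dispatch helper such as the following. It assumes at least one of the two signals is present, reuses the change_voice helper from the earlier sketch, and relies on a mix helper like the one sketched after the time-domain addition steps below; the function name route_audio is hypothetical.

```python
def route_audio(first_signal, second_signal, target_type):
    """Decide what to send to the live broadcast software (hypothetical helper)."""
    if first_signal is None and second_signal is not None:
        return second_signal                              # use the second signal directly
    if first_signal is not None and second_signal is None:
        if target_type is None:
            return first_signal                           # use the first signal directly
        return change_voice(first_signal, target_type)    # voice-changed first signal
    # Both signals present: apply the sound change if requested, then mix.
    if target_type is not None:
        first_signal = change_voice(first_signal, target_type)
    return mix(first_signal, second_signal)
```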
In this application, when there are a first audio signal collected from a microphone and a second audio signal output by multimedia software, the first audio signal collected by the microphone and the second audio signal output by the multimedia software are mixed, specifically, the mixing is performed as follows:
converting the first audio signal into a first digital audio signal and converting the second audio signal into a second digital audio signal;
and adding the first data audio signal and the second digital audio signal in a time domain to obtain a mixed digital audio signal.
Step 301, sending the audio signal after mixing processing to the currently running live broadcast software.
Fig. 5 exemplarily provides a schematic frame diagram of a live control audio, where multimedia software, live software, and live control software are installed and run in a device for live control audio, where the live control software is used to trigger a live sound mixing instruction, the multimedia software is used to output a second audio signal, and the live software is used for live broadcasting.
It should be noted that, because the application includes live broadcast control software, when the live broadcast control software triggers the live broadcast sound mixing instruction, the one-key live broadcast mode is entered; at this time, even if the live broadcast software is started, the multimedia software is not paused and still outputs the second audio signal.
Specifically, in conjunction with fig. 5 and fig. 6, fig. 6 exemplarily shows a schematic circuit diagram of controlling live audio in an embodiment of the present application.
After a live broadcast sound mixing instruction is triggered in the live broadcast control software, a CPU (central processing unit) in the equipment for controlling live audio responds to the live broadcast sound mixing instruction of the live broadcast control software and forwards the live broadcast sound mixing instruction to a DSP (digital signal processor) in the equipment, and the DSP performs mixing processing on the first audio signal collected by the microphone and the second audio signal output by the multimedia software.
When the multimedia software outputs a second audio signal, the CPU in the equipment for controlling the audio by live broadcasting transmits the second audio signal to an audio Codec in the equipment for controlling the audio by live broadcasting through an integrated circuit built-in audio bus I2S, the audio Codec codes the second audio signal sent by the CPU, converts an analog audio signal into a digital audio signal, and transmits the converted digital audio signal to the DSP through I2S;
meanwhile, the DSP collects a first audio signal through a microphone, at the moment, the first audio signal and a second audio signal are mixed in the DSP, and the mixed digital audio signal is transmitted to an audio Codec through I2S;
the audio Codec decodes the mixed digital audio signal transmitted by the DSP, converts the digital audio signal into an analog audio signal, transmits the converted analog audio signal including the first audio signal and the second audio signal to the CPU through the integrated circuit built-in audio bus I2S, and sends the converted analog audio signal to the live broadcasting software currently running.
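Purely as an illustrative summary of this data path, the following sketch walks through the hops of fig. 6. Every object and method name here (cpu, codec, dsp and their calls) is hypothetical and only stands in for the hardware behaviour described above; it does not represent any real driver or API.

```python
def one_key_live_pipeline(cpu, codec, dsp, microphone, live_software):
    """Illustrative walk-through of the fig. 6 data path (all objects are hypothetical)."""
    # CPU -> Codec over I2S: the second audio signal output by the multimedia software.
    second = cpu.read_multimedia_output()
    cpu.send_over_i2s(codec, second)
    second_digital = codec.encode(second)        # Codec converts the signal to digital form
    codec.send_over_i2s(dsp, second_digital)     # Codec -> DSP over I2S
    # Microphone -> DSP: the first audio signal.
    first_digital = dsp.capture(microphone)
    # The DSP mixes both signals and returns the result to the Codec over I2S.
    mixed_digital = dsp.mix(first_digital, second_digital)
    dsp.send_over_i2s(codec, mixed_digital)
    mixed_analog = codec.decode(mixed_digital)   # Codec converts the mixed signal back to analog
    codec.send_over_i2s(cpu, mixed_analog)       # Codec -> CPU over I2S
    cpu.send_to(live_software, mixed_analog)     # CPU forwards it to the live broadcast software
```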
It should be noted that, when the multimedia software outputs the second audio signal, in order to prevent the second audio signal output by the speaker from being picked up in the first audio signal collected by the microphone, which would introduce noise when the DSP mixes the first audio signal and the second audio signal, an earphone needs to be connected to the earphone interface corresponding to the speaker so as to ensure the sound pickup effect of the microphone.
In addition, the first audio signal may be subjected to sound change processing during the mixing processing, which may specifically refer to the description of the embodiment corresponding to fig. 3, and will not be described herein again.
Fig. 7 exemplarily provides a flowchart of an overall method for live control audio, in which multimedia software outputs a second audio signal during the live control audio, and the live control software triggers a live sound mixing instruction, and the method includes the following steps:
step 700, responding to a live broadcast sound mixing instruction;
step 701, acquiring a first audio signal through a microphone;
step 702, collecting a second audio signal output by the multimedia software;
step 703, judging whether the live broadcast sound mixing instruction comprises a sound change instruction, if so, executing step 704, otherwise, executing step 706;
step 704, judging whether the sound change instruction indicates high-quality male voice, high-quality female voice or doll voice;
step 705, mixing the first audio signal with the second audio signal after the sound changing processing;
step 706, directly mixing the first audio signal and the second audio signal;
and step 707, sending the audio signal after mixing processing to the currently running live broadcast software.
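As a rough sketch only, the flow of fig. 7 could be expressed as follows. Here instruction is a LiveMixInstruction as in the first sketch; capture_from_microphone, collect_multimedia_output and send_to_live_software are hypothetical helpers, and change_voice and mix refer to the earlier sketches.

```python
def handle_live_mix_instruction(instruction, microphone, multimedia_software):
    """Walk through the fig. 7 flow (steps 700-707) using hypothetical helpers."""
    first = capture_from_microphone(microphone)               # step 701
    second = collect_multimedia_output(multimedia_software)   # step 702
    if instruction.change_voice is not None:                  # step 703: sound change instruction?
        # step 704: which target voice was requested (male / female / doll)
        first = change_voice(first, instruction.change_voice)
        mixed = mix(first, second)                            # step 705
    else:
        mixed = mix(first, second)                            # step 706
    send_to_live_software(mixed)                              # step 707
```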
Based on the same inventive concept, the embodiment of the present application further provides a device for controlling live audio, and as the device corresponds to the method for controlling live audio in the embodiment of the present application, and the principle of the device for solving the problem is similar to that of the method, the implementation of the device may refer to the implementation of the method, and repeated details are omitted.
Fig. 8 exemplarily provides an apparatus for controlling live audio in an embodiment of the present application, where the apparatus includes: a CPU800, a DSP801, and a microphone 802, wherein:
the CPU800 is used for responding to the live broadcast sound mixing instruction and forwarding the live broadcast sound mixing instruction to the DSP;
the DSP801 is configured to respond to a live broadcast sound mixing instruction, perform mixing processing on a first audio signal acquired by the microphone 802 and a second audio signal output by the multimedia software, return the mixed audio signal to the CPU800, and send the mixed audio signal to currently running live broadcast software through the CPU 800.
In one possible implementation, the DSP801 is specifically configured to:
if the live broadcast sound mixing instruction comprises a sound change instruction, performing sound change processing on the first audio signal and then mixing the first audio signal after sound change processing with the second audio signal; or
And if the live broadcast sound mixing instruction does not comprise the sound change instruction, directly mixing the first audio signal and the second audio signal.
In one possible implementation, the DSP801 is specifically configured to:
if the first audio signal is not acquired and a second audio signal exists, directly transmitting the second audio signal to currently running live broadcast software; or
If the first audio signal is acquired, the live broadcast sound mixing instruction does not comprise a sound change instruction, and a second audio signal does not exist, the first audio signal is directly sent to currently running live broadcast software; or
And if the first audio signal is acquired, the live broadcast sound mixing instruction comprises a sound change instruction, and no second audio signal exists, the first audio signal is subjected to sound change processing and then sent to currently running live broadcast software.
In one possible implementation, the DSP801 is further configured to:
determining a target sound type indicated by the sound change instruction according to the sound change instruction;
determining an audio parameter corresponding to the target sound type according to the corresponding relation between the preset sound type and the audio parameter;
and modifying the audio parameters in the first audio signal into audio parameters corresponding to the target sound type.
In one possible implementation, the DSP801 is specifically configured to:
converting the first audio signal into a first digital audio signal and converting the second audio signal into a second digital audio signal;
and adding the first data audio signal and the second digital audio signal in a time domain to obtain a mixed digital audio signal.
Fig. 9 exemplarily provides an apparatus for controlling live audio in an embodiment of the present application, where the apparatus includes:
the processing unit 900 is configured to respond to a live sound mixing instruction, and perform mixing processing on a first audio signal acquired by a microphone and a second audio signal output by the multimedia software;
a sending unit 901, configured to send the audio signal after the mixing processing to currently running live broadcast software.
In one possible implementation, the processing unit 900 is specifically configured to:
if the live broadcast sound mixing instruction comprises a sound change instruction, performing sound change processing on the first audio signal and then mixing the first audio signal after sound change processing with the second audio signal; or
And if the live broadcast sound mixing instruction does not comprise the sound change instruction, directly mixing the first audio signal and the second audio signal.
In one possible implementation, the processing unit 900 is specifically configured to:
if the first audio signal is not acquired and a second audio signal exists, directly transmitting the second audio signal to currently running live broadcast software; or
If the first audio signal is acquired, the live broadcast sound mixing instruction does not comprise a sound change instruction, and a second audio signal does not exist, the first audio signal is directly sent to currently running live broadcast software; or
And if the first audio signal is acquired, the live broadcast sound mixing instruction comprises a sound change instruction, and no second audio signal exists, the first audio signal is subjected to sound change processing and then sent to currently running live broadcast software.
In one possible implementation, the processing unit 900 is specifically configured to:
determining a target sound type indicated by the sound change instruction according to the sound change instruction;
determining an audio parameter corresponding to the target sound type according to the corresponding relation between the preset sound type and the audio parameter;
and modifying the audio parameters in the first audio signal into audio parameters corresponding to the target sound type.
In one possible implementation, the processing unit 900 is specifically configured to:
converting the first audio signal into a first digital audio signal and converting the second audio signal into a second digital audio signal;
and adding the first data audio signal and the second digital audio signal in a time domain to obtain a mixed digital audio signal.
In some possible embodiments, the aspects of the method for controlling live audio provided by the present application may also be implemented in the form of a program product including program code for causing a computer device to perform the steps in the method for controlling live audio according to various exemplary embodiments of the present application described above in this specification when the program product is run on the computer device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for controlling live audio of embodiments of the present application may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be executable on a computing device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with a command execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, the features and functions of two or more units described above may be embodied in one unit, according to embodiments of the application. Conversely, the features and functions of one unit described above may be further divided into embodiments by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A method of controlling live audio, the method comprising:
responding to a live sound mixing instruction, and mixing a first audio signal collected by a microphone and a second audio signal output by multimedia software;
and sending the audio signal after the mixing processing to the currently running live broadcast software.
2. The method of claim 1, wherein mixing the first audio signal collected by the microphone and the second audio signal output by the multimedia software in response to a live sound mixing instruction comprises:
if the live broadcast sound mixing instruction comprises a sound change instruction, performing sound change processing on the first audio signal and then mixing the first audio signal after sound change processing with the second audio signal; or
And if the live broadcast sound mixing instruction does not comprise a sound change instruction, directly mixing the first audio signal and the second audio signal.
3. The method of claim 1, wherein mixing the first audio signal collected by the microphone and the second audio signal output by the multimedia software in response to a live sound mixing instruction comprises:
if the first audio signal is not acquired and the second audio signal exists, directly transmitting the second audio signal to currently-operated live broadcast software; or
If the first audio signal is acquired, the live broadcast sound mixing instruction does not comprise a sound change instruction, and the second audio signal does not exist, the first audio signal is directly sent to currently running live broadcast software; or
And if the first audio signal is acquired, the live broadcast sound mixing instruction comprises a sound change instruction, and the second audio signal does not exist, the first audio signal is subjected to sound change processing and then sent to currently running live broadcast software.
4. The method of claim 2 or 3, wherein the live broadcast sound mixing instruction comprises a sound change instruction, and the sound change processing of the first audio signal comprises:
determining a target sound type indicated by the sound change instruction according to the sound change instruction;
determining an audio parameter corresponding to the target sound type according to a preset corresponding relation between the sound type and the audio parameter;
and modifying the audio parameters in the first audio signal into audio parameters corresponding to the target sound type.
5. The method of claim 1, wherein the mixing the first audio signal collected by the microphone and the second audio signal output by the multimedia software comprises:
converting the first audio signal to a first digital audio signal and converting the second audio signal to a second digital audio signal;
and adding the first data audio signal and the second digital audio signal in a time domain to obtain a mixed digital audio signal.
6. An apparatus for controlling live audio, the apparatus comprising: central processing unit CPU, digital signal processor DSP and microphone, wherein:
the CPU is used for responding to a live broadcast sound mixing instruction and forwarding the live broadcast sound mixing instruction to the DSP;
and the DSP is used for responding to the live broadcast sound mixing instruction, mixing the first audio signal collected by the microphone and the second audio signal output by the multimedia software, returning the mixed audio signal to the CPU, and sending the mixed audio signal to the currently running live broadcast software through the CPU.
7. The device of claim 6, wherein the DSP is specifically configured to:
if the live broadcast sound mixing instruction comprises a sound change instruction, performing sound change processing on the first audio signal and then mixing the first audio signal after sound change processing with the second audio signal; or
And if the live broadcast sound mixing instruction does not comprise a sound change instruction, directly mixing the first audio signal and the second audio signal.
8. The apparatus of claim 6, wherein the DSP is specifically configured for:
if the first audio signal is not acquired and the second audio signal exists, directly sending the second audio signal to the currently running live broadcast software; or
if the first audio signal is acquired, the live broadcast sound mixing instruction does not comprise a sound change instruction, and the second audio signal does not exist, directly sending the first audio signal to the currently running live broadcast software; or
if the first audio signal is acquired, the live broadcast sound mixing instruction comprises a sound change instruction, and the second audio signal does not exist, performing sound change processing on the first audio signal and then sending it to the currently running live broadcast software.
9. The apparatus of claim 6 or 7, wherein the DSP is further configured for:
determining a target sound type indicated by the sound change instruction according to the sound change instruction;
determining an audio parameter corresponding to the target sound type according to a preset corresponding relation between the sound type and the audio parameter;
and modifying the audio parameters in the first audio signal into audio parameters corresponding to the target sound type.
10. The apparatus of claim 6, wherein the DSP is specifically configured for:
converting the first audio signal to a first digital audio signal and converting the second audio signal to a second digital audio signal;
and adding the first digital audio signal and the second digital audio signal in the time domain to obtain a mixed digital audio signal.
CN202011241301.5A 2020-11-09 2020-11-09 Method and equipment for controlling live broadcast audio Pending CN112423009A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011241301.5A CN112423009A (en) 2020-11-09 2020-11-09 Method and equipment for controlling live broadcast audio

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011241301.5A CN112423009A (en) 2020-11-09 2020-11-09 Method and equipment for controlling live broadcast audio

Publications (1)

Publication Number Publication Date
CN112423009A true CN112423009A (en) 2021-02-26

Family

ID=74780848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011241301.5A Pending CN112423009A (en) 2020-11-09 2020-11-09 Method and equipment for controlling live broadcast audio

Country Status (1)

Country Link
CN (1) CN112423009A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170286053A1 (en) * 2016-03-30 2017-10-05 Le Holdings(Beijing)Co., Ltd. System and method for real-time adjustment of volume during live broadcasting
CN105872253A (en) * 2016-05-31 2016-08-17 腾讯科技(深圳)有限公司 Live broadcast sound processing method and mobile terminal
CN109767777A (en) * 2019-01-31 2019-05-17 迅雷计算机(深圳)有限公司 A kind of sound mixing method that software is broadcast live
CN109788139A (en) * 2019-03-05 2019-05-21 北京会播科技有限公司 Mobile phone with direct broadcast function

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113096674A (en) * 2021-03-30 2021-07-09 联想(北京)有限公司 Audio processing method and device and electronic equipment
CN113096674B (en) * 2021-03-30 2023-02-17 联想(北京)有限公司 Audio processing method and device and electronic equipment
CN113132794A (en) * 2021-05-13 2021-07-16 北京字节跳动网络技术有限公司 Live background sound processing method, device, equipment, medium and program product
WO2022237463A1 (en) * 2021-05-13 2022-11-17 北京字节跳动网络技术有限公司 Livestreaming background sound processing method and apparatus, device, medium, and program product
WO2022237464A1 (en) * 2021-05-13 2022-11-17 北京字节跳动网络技术有限公司 Audio synthesis method and apparatus, and device, medium and program product
WO2023030536A1 (en) * 2021-09-06 2023-03-09 北京字跳网络技术有限公司 Harmony processing method and apparatus, device, and medium
CN114390304A (en) * 2021-12-20 2022-04-22 北京达佳互联信息技术有限公司 Live broadcast sound changing method and device, electronic equipment and storage medium
CN114390304B (en) * 2021-12-20 2023-08-08 北京达佳互联信息技术有限公司 Live broadcast sound changing method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111741372B (en) Screen projection method for video call, display device and terminal device
CN112423009A (en) Method and equipment for controlling live broadcast audio
CN111669621B (en) Media asset data issuing method, server and display device
CN111447498A (en) Awakening method of display equipment and display equipment
CN112367543B (en) Display device, mobile terminal, screen projection method and screen projection system
US20220116676A1 (en) Display apparatus and content display method
CN111752518A (en) Screen projection method of display equipment and display equipment
WO2021109418A1 (en) Video resource display method, mobile terminal and server
CN111935518B (en) Video screen projection method and display device
CN112272417B (en) double-Bluetooth sound box reconnection method and display device
US20210289263A1 (en) Data Transmission Method and Device
CN112135180B (en) Content display method and display equipment
CN111836115B (en) Screen saver display method, screen saver skipping method and display device
CN112165641A (en) Display device
CN112422365A (en) Display device and method for automatically monitoring network state
CN112437334A (en) Display device
CN111954059A (en) Screen saver display method and display device
US8891015B2 (en) Electronic apparatus and display control method
CN112399217B (en) Display device and method for establishing communication connection with power amplifier device
CN111954043B (en) Information bar display method and display equipment
WO2021184575A1 (en) Display device and display method
CN111984167A (en) Rapid naming method and display device
CN111263223A (en) Media volume adjusting method and display device
CN111669662A (en) Display device, video call method and server
CN112017415A (en) Recommendation method of virtual remote controller, display device and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20210226