CN118101629A - Audio live broadcast and audio processing method, device and storage medium

Audio live broadcast and audio processing method, device and storage medium

Info

Publication number
CN118101629A
Authority
CN
China
Prior art keywords
audio
live
audio data
device file
push
Legal status
Pending
Application number
CN202410176514.6A
Other languages
Chinese (zh)
Inventor
邸文华
张华宾
李娟
张焱
Current Assignee
Beijing Dushi Technology Co., Ltd.
Original Assignee
Beijing Dushi Technology Co., Ltd.
Application filed by Beijing Dushi Technology Co., Ltd.
Priority to CN202410176514.6A
Publication of CN118101629A


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 - Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/60 - Network streaming of media packets
    • H04L 65/75 - Media network packet handling

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The application discloses an audio live broadcast and audio processing method, an audio live broadcast and audio processing device, and a storage medium. The method, applied to a live device, includes: generating push audio for pushing to a live platform; writing first audio data of the push audio into a first device file, where the first device file corresponds to a first microphone device associated with a live application; and reading, with the live application via the operating system, the first audio data from the first device file and transmitting the first audio data to the live platform, where the live application corresponds to the live platform.

Description

Audio live broadcast and audio processing method, device and storage medium
Technical Field
The present application relates to the field of live broadcasting technologies, and in particular, to a method and apparatus for audio live broadcasting and audio processing, and a storage medium.
Background
With the progress of network communication technology, network live broadcasting is being developed and applied ever more widely. Live devices have been proposed that generate push audio for a live platform based on audio data from different audio sources.
In practice, the users of live devices, i.e., anchors, are typically deeply bound to particular live platforms. It is therefore natural that an anchor using a live device wishes to broadcast with a desired live application (where the live application corresponds to a live platform) in order to push audio to the desired live platform. However, because existing live applications (e.g., the Douyin application, the Kuaishou application, etc.) are developed for handheld devices such as mobile phones, a live application by default acquires audio from the device's own microphone.
In this case, even if a live application (e.g., the Douyin application, the Kuaishou application, etc.) corresponding to a live platform is installed on the live device, the live application cannot push the audio generated by the live device to the corresponding live platform. As a result, the anchor cannot push the audio generated by the live device to the desired live platform through the desired live application, and therefore cannot perform audio live broadcasting on the desired live platform using the live device.
No effective solution has yet been proposed for the technical problem in the prior art that an anchor cannot push audio generated by a live device to the corresponding live platform through a desired live application.
Disclosure of Invention
The embodiments of the present disclosure provide an audio live broadcast and audio processing method, device, and storage medium, so as to at least solve the technical problem in the prior art that an anchor cannot push audio generated by a live device to the corresponding live platform through a desired live application.
According to an aspect of the embodiments of the present disclosure, there is provided an audio live broadcast method, including: generating push audio for pushing to the live platform; writing first audio data of push audio into a first device file, wherein the first device file corresponds to a first microphone device associated with a live application; and reading the first audio data from the first device file by using the live application program through the operating system and transmitting the first audio data to the live platform, wherein the live application program corresponds to the live platform.
According to another aspect of the embodiments of the present disclosure, there is also provided an audio processing method, including: generating push audio for pushing to the live platform; and writing first audio data of the push audio to a first device file configured by the operating system, wherein the first device file corresponds to a first microphone device associated with the live application and the live application corresponds to the live platform.
According to another aspect of the embodiments of the present disclosure, there is also provided a storage medium including a stored program, wherein the method of any one of the above is performed by a processor when the program is run.
According to another aspect of the embodiments of the present disclosure, there is also provided an audio live broadcast apparatus, including: the first push audio generation module is used for generating push audio for pushing to the live broadcast platform; the first push audio writing module is used for writing first audio data of push audio into a first device file, wherein the first device file corresponds to a first microphone device associated with the live application program; and the first audio data reading module is used for reading the first audio data from the first equipment file by using the live broadcast application program through the operating system and transmitting the first audio data to the live broadcast platform, wherein the live broadcast application program corresponds to the live broadcast platform.
According to another aspect of the embodiments of the present disclosure, there is also provided an audio processing apparatus, including: a second push audio generation module, configured to generate push audio for pushing to the live platform; and a second push audio writing module, configured to write first audio data of the push audio into a first device file configured by the operating system, where the first device file corresponds to a first microphone device associated with the live application, and the live application corresponds to the live platform.
According to another aspect of the embodiments of the present disclosure, there is also provided an audio live broadcast apparatus, including: a first processor; and a first memory, connected to the first processor, configured to provide the first processor with instructions for performing the following steps: generating push audio for pushing to the live platform; writing first audio data of the push audio into a first device file, where the first device file corresponds to a first microphone device associated with a live application; and reading, with the live application via the operating system, the first audio data from the first device file and transmitting the first audio data to the live platform, where the live application corresponds to the live platform.
According to another aspect of the embodiments of the present disclosure, there is also provided an audio processing apparatus, including: a second processor; and a second memory, connected to the second processor, configured to provide the second processor with instructions for performing the following steps: generating push audio for pushing to the live platform; and writing first audio data of the push audio into a first device file configured by the operating system, where the first device file corresponds to a first microphone device associated with the live application, and the live application corresponds to the live platform.
The application provides an audio live broadcast method which is applied to live broadcast equipment. First, the audio processing application generates push audio for pushing to the live platform. The audio processing application then writes the first audio data of the push audio to the first device file. Wherein the first device file corresponds to a first microphone device associated with the live application. Finally, the live application reads the first audio data from the first device file via the operating system and transmits the first audio data to the live platform. Wherein the live application corresponds to the live platform.
Because the first device file corresponds to the first microphone device associated with the live broadcast application program, and the live broadcast application program corresponds to the live broadcast platform, after the operating system receives a request sent by the live broadcast application program to acquire data from the first microphone device, the operating system directly reads corresponding data from the first device file and sends the read data to the live broadcast application program.
Further, since the present application writes the first audio data of the push audio into the first device file, the data read by the operating system from the first device file is the first audio data of the push audio. Thus, after the operating system sends the first audio data to the live application, the live application transmits the first audio data to the live platform. Therefore, the audience user can hear the live audio corresponding to the push audio from the corresponding live platform through the terminal equipment.
Therefore, the technical effect of ensuring that the anchor can conduct live broadcasting on the desired live platform is achieved, which in turn solves the technical problem in the prior art that an anchor cannot push audio generated by a live device to the corresponding live platform through a desired live application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate the present disclosure and, together with the description, serve to explain it. In the drawings:
Fig. 1 is a schematic diagram of an audio live system and a plurality of live platforms according to embodiment 1 of the present application;
Fig. 2 is a schematic diagram of hardware modules of an audio live system and a plurality of live platforms according to embodiment 1 of the present application;
Fig. 3 is a schematic diagram of a hierarchical relationship structure of an audio live system according to embodiment 1 of the present application;
Fig. 4 is a flowchart of an audio live method according to embodiment 1 of the present application;
Fig. 5 is a flowchart of an audio processing method according to embodiment 1 of the present application;
Fig. 6 is a schematic diagram of an audio live device according to embodiment 2 of the present application;
Fig. 7 is a schematic diagram of an audio processing apparatus according to embodiment 2 of the present application;
Fig. 8 is a schematic diagram of an audio live device according to embodiment 3 of the present application; and
Fig. 9 is a schematic diagram of an audio processing apparatus according to embodiment 3 of the present application.
Detailed Description
In order to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure are described below clearly and completely with reference to the drawings of the embodiments. It is apparent that the described embodiments are merely some, rather than all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this disclosure without inventive effort shall fall within the protection scope of the present disclosure.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some terms appearing in the description of the embodiments of the present disclosure are explained as follows:
Push audio: the push audio of the present invention refers to audio that is generated by the live device and sent to a live platform for live broadcasting.
Live audio: the live audio of the present invention refers to the audio, corresponding to the push audio, that the live platform sends to the terminal devices of the audience users.
Live device: the live device of the present invention is a device with a live broadcast function, where the live broadcast function is the function of sending the generated push audio to a live platform. Accordingly, the live device also includes computers, mobile phones, tablets, and the like on which a live application is installed.
Example 1
According to the present embodiment, a method embodiment of audio live broadcasting and audio processing is provided. It should be noted that the steps shown in the flowcharts of the figures may be performed in a computer system, for example as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one shown or described herein.
Fig. 1 is a schematic diagram of an audio live system and a plurality of live platforms according to embodiment 1 of the present application. Referring to fig. 1, the audio live system includes: a live device 100 and a plurality of different types of audio sources 201-204.
Referring to fig. 1, the audio sources 201-203 may be connected to the live device 100, for example, through corresponding audio source input interfaces. The audio source 204 may also be communicatively connected to the live device 100 (where the network used by the audio source 204 to connect to the live device 100 includes, but is not limited to, a mobile communication network, a wireless network, etc.). Furthermore, although not shown in fig. 1, an audio source 205 disposed inside the live device 100 may also be included. The audio sources 201-204 may include, for example, a 3.5 mm microphone 201, a computer 202, a USB microphone 203, and a mobile phone 204. It should be noted that, as is apparent to those skilled in the art, the foregoing merely illustrates the types of audio sources and the ways in which audio sources may be connected to the live device 100, and the actual situation is not limited thereto.
Further, as shown in fig. 1, a plurality of live platforms 301 to 30n are also included. After generating the push audio, the processor in the live device 100 transmits first audio data corresponding to the push audio to the live platforms 301 to 30n. Accordingly, audience users can receive, through their terminal devices, live audio corresponding to the push audio from the corresponding live platforms 301 to 30n. The live platform may be, for example, the Douyin live platform, the Kuaishou live platform, the Huya live platform, or the like.
It is noted that although not shown in fig. 1, the audio source may be, for example, audio stored locally in the live device 100 or network audio accessed by a processor in the live device 100 through a link address, etc., and is not particularly limited herein.
Fig. 2 is a schematic diagram of hardware modules of an audio live system and a plurality of live platforms according to embodiment 1 of the present application. Referring to fig. 2, a 3.5 microphone 201 is connected to the processor 110 in the live device 100 through an audio codec 111; the computer 202 is connected with the processor 110 in the live broadcast device 100 through the HDMI interface 112; the USB microphone 203 is connected to the processor 110 in the live device 100 through the USB interface 113; the handset 204 is communicatively coupled to the processor 110 in the live device 100 via the network communication circuit 114; a built-in microphone 115 is also connected to the processor 110 through the audio codec 111.
Further, referring to fig. 2, the processor 110 in the live device 100 is communicatively connected to the live platforms 301 to 30n via the network communication circuit 114, so that the first audio data corresponding to the push audio can be transmitted to the live platforms 301 to 30n. In addition, although not shown in fig. 2, the processor 110 may also be communicatively connected to the live platforms 301 to 30n via, for example, a mobile communication circuit.
It should be noted that the processor 110 may send the first audio data to one live platform, or may send the first audio data to a plurality of different live platforms respectively. The present invention is not particularly limited herein.
Fig. 3 is a schematic diagram of the hierarchical relationship structure of an audio live system according to embodiment 1 of the present application. Referring to fig. 3, the live device 100 is deployed with an operating system and a hardware layer below the operating system; in this embodiment, Android is taken as an example of the operating system. As shown in fig. 3, the operating system includes a driver layer, a HAL layer, a service layer, a framework layer, and an application layer.
An audio processing application is deployed at the application layer. The audio processing application is used to generate push audio during live broadcasting. In addition, a plurality of live applications 1 to n are deployed in the application layer. The live applications 1 to n are used to transmit push audio to the corresponding live platforms. The live applications 1 to n may be, for example, live applications developed by third parties based on the Android system (e.g., the Douyin application, the Kuaishou application, etc.).
Further, a first device file may be deployed in the operating system (e.g., the first device file may be /dev/snd/pcmC0D0c, deployed under the /dev directory of the operating system), where "C0" indicates that the sound card ID is 0, "D0" indicates that the device ID is 0, and "c" indicates that the file is used for recording (capture). With the first device file deployed in the operating system, the first device file may be designated at the HAL layer, using a corresponding function, as the device file corresponding to the first microphone device associated with the live applications 1 to n. For example, the first microphone device associated with the live applications 1 to n is the microphone of a handheld device such as a mobile phone, so the first device file may be designated at the HAL layer, using a corresponding function, as the device file corresponding to the microphone associated with the live applications 1 to n. Thus, when the audio processing application writes audio data into the first device file, the live applications 1 to n can receive the audio data read from the first device file by the operating system.
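As a rough illustration of this designation, the sketch below (in C, assuming a tinyalsa-based audio HAL) shows how a HAL routine could open the capture substream of the virtual sound card instead of a physical microphone, so that an application recording from the default microphone actually receives the data written into /dev/snd/pcmC0D0c. The application only states that "a corresponding function" at the HAL layer performs this mapping; the function name and the card/device indices below are assumptions, not taken from the application.

```c
/* Hedged sketch only: a hypothetical fragment of an Android audio HAL.
 * The function name and indices are illustrative assumptions. */
#include <tinyalsa/asoundlib.h>

#define VIRT_CARD   0   /* "C0" in /dev/snd/pcmC0D0c */
#define VIRT_DEVICE 0   /* "D0" in /dev/snd/pcmC0D0c */

/* Open the microphone stream that live applications will record from.
 * Instead of the codec's physical microphone PCM, open the capture
 * substream of the virtual sound card, so a live application reads the
 * push audio that was written into the first device file. */
struct pcm *open_default_mic(struct pcm_config *cfg)
{
    return pcm_open(VIRT_CARD, VIRT_DEVICE, PCM_IN, cfg);
}
```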
Further, a second device file may be deployed in the operating system of the live device 100 (e.g., the second device file may be /dev/snd/pcmC0D0p, deployed under the /dev directory of the operating system), where "C0" indicates that the sound card ID is 0, "D0" indicates that the device ID is 0, and "p" indicates that the file is used for playback. Referring to the above, it can be seen that the first device file and the second device file correspond to the same virtual sound card.
In addition, although not shown in fig. 3, a virtual sound card driver is also deployed at the driver layer, so that the virtual sound card driver can generate, at the driver layer, a first device file corresponding to a second microphone device of the virtual sound card and a second device file corresponding to a virtual player device of the virtual sound card.
The operating system may also create corresponding third device files at the driver layer based on the types of the audio source input interfaces at the hardware layer. For example, the hardware layer includes the audio codec 111, the HDMI interface 112, and the USB interface 113. For the audio codec 111, regardless of whether the 3.5 mm microphone 201 or the built-in microphone 115 is connected, the operating system generates a corresponding third device file, which may be, for example, a capture device file under the operating system's /dev/snd directory. For the HDMI interface 112, regardless of whether the computer 202 is connected, the operating system generates a corresponding third device file of the same form. For the USB interface 113, only when the USB microphone 203 is connected does the operating system generate a corresponding third device file.
Furthermore, it should be noted that, in the present embodiment, Android is used as an example of the operating system, but it will be clear to those skilled in the art that other types of operating systems are also within the scope of the present application.
In the above-described operating environment, according to a first aspect of the present embodiment, there is provided an audio live broadcast method, which is implemented by the processor 110 shown in fig. 2. Fig. 4 shows a schematic flowchart of the method. Referring to fig. 4, the method includes:
S402: generating push audio for pushing to the live platform;
S404: writing first audio data of push audio into a first device file, wherein the first device file corresponds to a first microphone device associated with a live application; and
S406: and reading the first audio data from the first device file by using the live application program through the operating system, and transmitting the first audio data to the live platform, wherein the live application program corresponds to the live platform.
Specifically, when an anchor performs audio live broadcasting, the anchor first uses the live device 100 to generate push audio for pushing to the live platform (S402).
Further, the live device 100 writes the generated first audio data of the push audio into a first device file deployed in the operating system (S404). As described above, the first device file may be /dev/snd/pcmC0D0c deployed under the operating system's /dev directory, and the live device 100 writes the generated first audio data of the push audio into this device file. Further, as described above, in the present embodiment, the first device file may be designated in advance at the HAL layer, using a corresponding function, as the device file corresponding to the first microphone device associated with the live applications 1 to n. Specifically, the live applications (for example, the Douyin application, the Kuaishou application, and the like) are live applications developed by third parties for handheld devices such as mobile phones, and such a live application acquires audio data from the device's microphone by default; therefore, in the present embodiment, the first microphone device corresponds to the microphone used by the live application by default. Accordingly, the first device file may be designated in advance at the HAL layer, using a corresponding function, as the device file corresponding to that microphone, so that the live application can read the audio data from the first device file through the operating system. The foregoing will be described in detail later and is therefore not repeated here.
Finally, the live application reads the first audio data from the first device file via the operating system and transmits the first audio data to the live platform (S406). Referring to the above, since the first device file in the embodiment corresponds to the first microphone device associated with the live application, the live application may obtain, by using the operating system, first audio data corresponding to the push audio from the first device file, and send the first audio data as the push audio to the live platform.
As stated in the background, existing live applications (e.g., the Douyin application, the Kuaishou application, etc.) are developed for handheld devices such as mobile phones, and such a live application obtains audio data from the device's microphone by default. For example, when the anchor needs to conduct audio live broadcasting, the live application responds to the anchor's click operation and sends a request to the operating system of the handheld device to acquire audio from the microphone. The operating system then responds to the request and reads the audio data collected by the microphone from the device file corresponding to the microphone. The operating system then sends the read audio data to the live application, so that the live application can generate push audio based on the audio data collected by the microphone and transmit the push audio to the live platform.
Because existing live applications are developed for such handheld devices, that is, a live application by default acquires audio data from the device's microphone, even if a live application (for example, the Douyin application, the Kuaishou application, etc.) corresponding to a live platform is installed on the live device, the live application cannot push the audio generated by the live device to the live platform desired by the anchor.
Thus, in order to solve the above-described problem, the present embodiment deploys the first device file in the operating system of the live device, and the first device file corresponds to the first microphone device associated with the live application. The first microphone device may be, for example, the microphone device that the live application uses by default (e.g., the first microphone device corresponds to the microphone of a handheld device). Thus, the live application can acquire, through the operating system, the first audio data corresponding to the push audio from the first device file and send the first audio data to the live platform as the push audio.
As can be seen from the above description, since the first device file corresponds to the first microphone device associated with the live application, in any case, as long as the operating system receives the request sent by the live application to acquire the audio data from the first microphone device, the operating system will read the corresponding audio data from the first device file deployed in advance.
Thus, during live broadcasting, the live device may, for example, write the first audio data of the push audio into the first device file. The live application can then acquire, through the operating system, the first audio data corresponding to the push audio from the first device file and send the first audio data to the live platform as the push audio. In this way, the anchor can send the push audio generated by the live device to the corresponding live platform through the desired live application, and audience users can hear the live audio corresponding to the push audio from the corresponding live platform through their terminal devices.
Therefore, the technical effect of ensuring that the anchor can use the live device to conduct live broadcasting on the desired live platform is achieved, which in turn solves the technical problem in the prior art that an anchor cannot send push audio generated by a live device to the corresponding live platform through a desired live application.
Optionally, the first device file further corresponds to a second microphone device disposed on a virtual sound card of the live device, the second microphone device is a virtual microphone device, and the operation of writing the first audio data of the push audio into the first device file includes: writing the push audio into a second device file, wherein the second device file corresponds to the virtual player device of the virtual sound card; and invoking push audio from the second device file using the virtual sound card and writing the push audio to the first device file.
Specifically, since the device file used for recording and the device file used for playback are in fact two different device files, the same device file cannot be used for both playback and recording. Therefore, in this embodiment, the driver layer of the operating system is deployed with a virtual sound card driver capable of generating a first device file corresponding to the virtual microphone device and a second device file corresponding to the virtual player device. For example, the first device file may be /dev/snd/pcmC0D0c, deployed under the /dev directory of the operating system, where "C0" indicates that the sound card ID is 0, "D0" indicates that the device ID is 0, and "c" indicates that the file is used for recording. As another example, the second device file may be /dev/snd/pcmC0D0p, deployed under the /dev directory of the operating system, where "C0" indicates that the sound card ID is 0, "D0" indicates that the device ID is 0, and "p" indicates that the file is used for playback.
Referring to the above, it can be seen that the first device file and the second device file actually correspond to the same virtual sound card, and are respectively used for "recording" and "playing".
Thus, for example, the first device file may be designated at the HAL layer, using a specific function, as the device file corresponding to the default microphone of the handheld device. Accordingly, when the audio processing application writes the first audio data of the push audio into the second device file, the virtual sound card invokes the first audio data from the second device file and writes the first audio data into the first device file. Then, when the operating system receives a request sent by the live application to obtain audio data from the first microphone device, the operating system reads the first audio data from the first device file and sends the first audio data to the corresponding live application.
Since the device file used for recording and the device file used for playback are in fact two different device files, and the same device file cannot be used for both playback and recording, this embodiment provides, at the driver layer, a first device file for recording and a second device file for playback that correspond to the same virtual sound card. Therefore, when the audio processing application writes the first audio data into the second device file, the operating system can read the first audio data from the first device file corresponding to the second device file and send it to the live application associated with the live platform, achieving the technical effect of ensuring that the anchor can normally conduct audio live broadcasting with the live device.
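A minimal user-space sketch of this write path is given below (C with tinyalsa; the card/device indices, PCM configuration, function name, and the use of the legacy byte-count pcm_write call are assumptions, since the application does not specify them). The audio processing application writes the push audio into the playback substream (the second device file), and the virtual sound card makes the same samples readable on the capture substream (the first device file).

```c
/* Hedged sketch: write push audio into the virtual sound card's playback
 * substream (/dev/snd/pcmC0D0p, i.e. the second device file). Card/device
 * numbers and the PCM configuration are assumed for illustration. */
#include <tinyalsa/asoundlib.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int write_push_audio(const int16_t *frames, unsigned int frame_count)
{
    struct pcm_config cfg;
    memset(&cfg, 0, sizeof(cfg));
    cfg.channels     = 2;                 /* assumed: stereo push audio */
    cfg.rate         = 48000;             /* assumed: 48 kHz            */
    cfg.period_size  = 1024;
    cfg.period_count = 4;
    cfg.format       = PCM_FORMAT_S16_LE;

    /* card 0, device 0, playback direction -> pcmC0D0p */
    struct pcm *out = pcm_open(0, 0, PCM_OUT, &cfg);
    if (out == NULL || !pcm_is_ready(out)) {
        fprintf(stderr, "cannot open virtual playback device\n");
        return -1;
    }

    /* Each frame is 2 channels x 2 bytes. The virtual sound card driver
     * relays these samples to the capture substream pcmC0D0c. */
    int rc = pcm_write(out, frames, frame_count * 2 * sizeof(int16_t));
    pcm_close(out);
    return rc;
}
```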
Optionally, the operation of reading the first audio data from the first device file via the operating system with the live application includes: transmitting a request to acquire data from the first microphone device to an operating system using a live application; and receiving, with the live application, first audio data from the operating system read by the operating system from the first device file.
Specifically, referring to fig. 3, for example, when an anchor uses the live device 100 to broadcast through the live platforms corresponding to the live applications 1 to n, the live applications 1 to n on the live device 100 each send to the operating system a request to acquire audio data from the respectively associated first microphone device (for example, the default microphone of a handheld device). The operating system then reads the first audio data from the first device file in response to the data acquisition requests sent by the live applications 1 to n, and sends the read first audio data to the live applications 1 to n, respectively.
Therefore, in the technical solution of the present application, the push audio can be pushed, via the operating system, to the corresponding live platform using any live application supported by the live device. For the anchor, live broadcasting on the live platform corresponding to a live application therefore only requires installing the desired live application on the operating system of the live device, which makes the live device convenient to use.
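For completeness, a matching read-side sketch is shown below (same assumptions as the previous sketch): when the operating system serves a live application's request for microphone data, it reads frames from the capture substream of the virtual sound card, i.e. the first device file, and hands them to the live application as if they came from a physical microphone.

```c
/* Hedged sketch: read the push audio back from the virtual sound card's
 * capture substream (/dev/snd/pcmC0D0c, i.e. the first device file), as
 * the operating system would when a live application requests microphone
 * data. Configuration values mirror the write-side sketch and are assumed. */
#include <tinyalsa/asoundlib.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int read_first_audio_data(int16_t *frames, unsigned int frame_count)
{
    struct pcm_config cfg;
    memset(&cfg, 0, sizeof(cfg));
    cfg.channels     = 2;
    cfg.rate         = 48000;
    cfg.period_size  = 1024;
    cfg.period_count = 4;
    cfg.format       = PCM_FORMAT_S16_LE;

    /* card 0, device 0, capture direction -> pcmC0D0c */
    struct pcm *in = pcm_open(0, 0, PCM_IN, &cfg);
    if (in == NULL || !pcm_is_ready(in)) {
        fprintf(stderr, "cannot open virtual capture device\n");
        return -1;
    }

    /* The frames obtained here are the first audio data of the push audio,
     * which the live application then transmits to the live platform. */
    int rc = pcm_read(in, frames, frame_count * 2 * sizeof(int16_t));
    pcm_close(in);
    return rc;
}
```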
Optionally, the operation of generating push audio for pushing to the live platform includes: obtaining second audio data from at least one audio source, wherein the at least one audio source includes a third microphone device different from the first microphone device and the second microphone device; and generating push audio with the audio processing application based on the second audio data.
Specifically, referring to FIG. 3, first, an audio processing application obtains second audio data from at least one audio source. Wherein the at least one audio source includes a third microphone device that is different from the first microphone device and the second microphone device associated with the live application. And further, the second audio data may be, for example, audio data that the live device 100 obtained from the at least one audio source and that has not been further processed by the audio processing application.
The at least one audio source may be, for example, a microphone external to the live device 100, an external computer or a built-in microphone, audio stored locally in the live device 100, or network audio accessed by the live device 100 through a link address, etc. In summary, the at least one audio source is a different audio source with respect to the first microphone device associated with the live application and the second microphone device of the virtual sound card, without specific limitation herein.
Then, in the event that the audio processing application obtains second audio data from the at least one audio source, push audio for pushing to the live platform is generated based on the second audio data.
Unlike the prior art, in which an anchor can only generate push audio using a live application installed on a mobile phone, tablet, or computer, in the technical solution of the present application at least one audio source, including a mobile phone, tablet, or computer, can be connected to the live device, so that the audio processing application can receive the second audio data transmitted by the at least one audio source and generate the push audio based on the second audio data.
Therefore, in the technical solution of the present application, compared with push audio generated directly by a live application installed on a mobile phone, tablet, or computer, the push audio for pushing to the live platform contains richer audio information, which helps attract audience users to listen to the live broadcast.
Optionally, the operation of obtaining second audio data from at least one audio source includes: reading, by the operating system, the second audio data from a third device file corresponding to the third microphone device.
Specifically, referring to fig. 3, the operating system first automatically creates a corresponding third device file at the driver layer according to the type of each audio source input interface at the hardware layer, and writes the second audio data into the corresponding third device file.
When the audio source input interface is the audio codec and/or the HDMI interface, the operating system may generate the corresponding third device file in advance, regardless of whether a corresponding audio source is connected to the audio codec and/or the HDMI interface. When the audio codec and/or the HDMI interface is connected to a corresponding audio source, the operating system directly writes the received second audio data into the corresponding third device file.
For example, when the audio source input interface is the audio codec 111 and the audio codec 111 in the live device 100 is connected to the 3.5 mm microphone 201 and/or the built-in microphone 115, the operating system directly writes the received second audio data into the corresponding third device file 1.
For another example, in a case where the audio source input interface is the HDMI interface 112 and the HDMI interface 112 of the live device 100 is connected to the computer 202, the operating system directly writes the received second audio data into the corresponding third device file 2.
In the case where the audio source input interface is the USB interface 113, the operating system dynamically generates the third device file 3. That is, when the USB interface 113 is connected to the USB microphone 203, the operating system generates the corresponding third device file 3; when the USB interface 113 is not connected to the USB microphone 203, the operating system does not generate the corresponding third device file 3.
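As a rough illustration of this dynamic behaviour (plain POSIX C; the helper name is hypothetical and the /dev/snd naming convention is the standard ALSA layout rather than something stated in the application), the sketch below scans /dev/snd for capture device files, which is one way the third device file created for a hot-plugged USB microphone could be discovered after the microphone is connected.

```c
/* Hedged sketch: enumerate capture device files under /dev/snd. A file
 * named pcmC<x>D<y>c appears only once the corresponding capture device
 * exists, so a newly created third device file for a hot-plugged USB
 * microphone shows up here. The helper name is illustrative. */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

void list_capture_device_files(void)
{
    DIR *dir = opendir("/dev/snd");
    if (dir == NULL) {
        perror("opendir /dev/snd");
        return;
    }
    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        size_t len = strlen(entry->d_name);
        /* Capture PCM device files follow the pattern pcmC<x>D<y>c. */
        if (strncmp(entry->d_name, "pcmC", 4) == 0 &&
            len > 0 && entry->d_name[len - 1] == 'c') {
            printf("capture device file: /dev/snd/%s\n", entry->d_name);
        }
    }
    closedir(dir);
}
```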
In addition, it should be noted that, when the second audio data corresponding to the mobile phone 204 is received by the live device 100 through the network communication circuit 114, the operating system directly writes the second audio data into memory, and no third device file needs to be generated.
If the live device 100 does not have an external audio source and the second audio data transmitted to the audio processing application is audio stored locally by the live device 100 or accessed by the live device 100 via a link address, the operating system does not need to deploy a third device file at the driver layer. The operating system directly sends the second audio data of the locally stored audio or the second audio data of the network audio accessed through the link address to the audio processing application program of the application program layer.
It should be noted that there may be one or more third device files.
For example, in the case where one audio codec 111 or HDMI interface 112 exists in the live device 100, the operating system generates one corresponding third device file 1 or third device file 2 in advance. Then, the operating system writes the second audio data received through the audio codec 111 or the HDMI interface 112 into the corresponding third device file 1 or third device file 2.
For another example, in a case where both the audio codec 111 and/or the HDMI interface 112 and the USB interface 113 are present in the live device 100, the operating system generates, in advance, the third device file 1 and/or the third device file 2 corresponding to the audio codec 111 and/or the HDMI interface 112. Then, when the audio codec 111 is connected to the 3.5 mm microphone 201 and/or the HDMI interface 112 is connected to the computer 202, the operating system writes the corresponding second audio data into the third device file 1 and/or the third device file 2 generated in advance. When the USB interface 113 is connected to the USB microphone 203, the operating system dynamically generates the corresponding third device file 3 and writes the second audio data received through the USB interface 113 into the corresponding third device file 3.
And then the operating system reads the second audio data from a third device file corresponding to the third microphone device and sends the second audio data to the audio processing application program, so that operations such as audio mixing processing and/or fusion processing are performed on the second audio data through the audio processing application program, and push audio for pushing to the live broadcast platform is generated.
In the technical solution of the present application, the third device file is deployed at the driver layer, so that the operating system can write the second audio data of at least one audio source externally connected to the live device into the third device file, and the operating system can then read the second audio data from the third device file and send it to the audio processing application. Therefore, when the anchor needs to use an external audio source for live broadcasting, the audio processing application is guaranteed to obtain the second audio data of the external audio source, and the anchor is guaranteed to be able to broadcast normally.
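To make the mixing step mentioned above concrete, the following sketch (plain C) combines two buffers of second audio data into one buffer of push audio samples. Saturating 16-bit addition is only one of many possible mixing strategies and is an assumption; the application does not specify how the audio processing application performs mixing or fusion.

```c
/* Hedged sketch: mix two streams of second audio data into push audio by
 * summing 16-bit samples with saturation. The real audio processing
 * application may use a different mixing/fusion algorithm. */
#include <stdint.h>
#include <stddef.h>

void mix_second_audio_data(const int16_t *src_a, const int16_t *src_b,
                           int16_t *push_out, size_t sample_count)
{
    for (size_t i = 0; i < sample_count; i++) {
        int32_t sum = (int32_t)src_a[i] + (int32_t)src_b[i];
        if (sum > INT16_MAX)
            sum = INT16_MAX;       /* clamp to avoid wrap-around distortion */
        else if (sum < INT16_MIN)
            sum = INT16_MIN;
        push_out[i] = (int16_t)sum;
    }
}
```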
Optionally, before generating the push audio for pushing to the live platform, the method further comprises: deploying a first device file and a second device file associated with the first device file in an operating system by using a virtual sound card driver; and associating the first device file with the first microphone device in the operating system.
Specifically, referring to fig. 3, before the audio processing application generates push audio for pushing to the live platform, a virtual sound card driver is deployed at the driver layer of the operating system.
Then, the virtual sound card driver generates a first device file and a second device file. Wherein the first device file is a device file for reading the first audio data and the second device file is a device file for writing the first audio data. The first device file may thus be designated at the HAL layer as a device file corresponding to the first microphone device associated with the live application using the corresponding function.
Therefore, in the technical scheme of the application, under the condition that the live broadcast application program sends a request for acquiring data from the first microphone equipment to the operating system, the operating system reads the first audio data in the first equipment file and sends the first audio data to the live broadcast application program. Therefore, for the anchor person, the live broadcast can be performed only by sending the live broadcast instruction to the live broadcast equipment 100 through the live broadcast application program, so that the technical effect of being convenient for the anchor person to perform live broadcast by using the live broadcast equipment is achieved.
According to the first aspect of the present embodiment, the technical effect of ensuring that the anchor can conduct audio live broadcasting on the desired live platform is achieved.
Thus, during live broadcasting, the live device 100 obtains the second audio data from the external microphone device and writes the second audio data into the third device file using the operating system. The operating system then reads the second audio data from the third device file and sends it to the audio processing application. The audio processing application then generates the push audio based on the second audio data and writes the first audio data of the push audio into the second device file. At the same time, the operating system writes the first audio data in the second device file into the first device file. Further, the live application sends a request to the operating system to retrieve audio data from its associated first microphone device. The operating system then reads the first audio data from the first device file in response to the request and sends the first audio data to the live application. Finally, the live application sends the first audio data of the push audio to the corresponding live platform.
Further, according to a second aspect of the present embodiment, there is provided an audio processing method implemented by the processor 110 shown in fig. 2. Fig. 5 shows a schematic flow chart of the method, and referring to fig. 5, the method includes:
S502: generating push audio for pushing to the live platform; and
S504: and writing the first audio data of the push audio into a first device file configured by the operating system, wherein the first device file corresponds to a first microphone device associated with the live application, and the live application corresponds to the live platform.
Notably, as may be the case with the first aspect of the embodiments of the present application, the live device 100 may have a live application pre-installed, such that the operating system may send the first audio data read from the first device file to the pre-installed live application, and the live application transmits the first audio data to the live platform. Also, as described in the second aspect of the embodiment of the present application, the live device 100 may not install the live application in advance, and may download the live application during the audio live process. For example, the live application may be downloaded after the live device 100 is externally connected to an audio source; the live application may also be downloaded after the live device 100 invokes locally stored audio or accesses network audio via a link address. The present application is not particularly limited herein.
According to the second aspect of the present embodiment, the technical effect of ensuring that the anchor can conduct audio live broadcasting on the desired live platform is achieved.
Further, referring to fig. 1, according to a third aspect of the present embodiment, there is provided a storage medium. The storage medium includes a stored program, wherein the method of any one of the above is performed by a processor when the program is run.
Therefore, according to the present embodiment, the technical effect of ensuring that the anchor can conduct audio live broadcasting on the desired live platform is achieved.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present invention is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present invention. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present invention.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, or of course by means of hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods of the embodiments of the present invention.
Example 2
Fig. 6 shows an audio live device 600 according to the first aspect of the present embodiment, which device 600 corresponds to the method according to the first aspect of embodiment 1. Referring to fig. 6, the apparatus 600 includes: a first push audio generation module 610, configured to generate push audio for pushing to a live platform; a first push audio writing module 620, configured to write first audio data of push audio into a first device file, where the first device file corresponds to a first microphone device associated with a live application; and a first audio data reading module 630, configured to read, by using a live application program, the first audio data from the first device file via the operating system, and transmit the first audio data to the live platform, where the live application program corresponds to the live platform.
Optionally, the first device file further corresponds to a second microphone device disposed on a virtual sound card of the live device, the second microphone device is a virtual microphone device, and the first push audio writing module 620 includes: the first push audio writing sub-module is used for writing push audio into a second device file, wherein the second device file corresponds to virtual player equipment of the virtual sound card; and a second push audio writing sub-module for calling push audio from the second device file by using the virtual sound card and writing the push audio into the first device file.
Optionally, the first audio data reading module 630 includes: the data request module is used for sending a request for acquiring data from the first microphone equipment to the operating system by utilizing the live broadcast application program; and a data receiving module for receiving, from the operating system, first audio data read by the operating system from the first device file using the live application.
Optionally, the first push audio generation module 610 includes: a data acquisition module for acquiring second audio data from at least one audio source, wherein the at least one audio source includes a third microphone device different from the first microphone device and the second microphone device; and a first push audio generation sub-module for generating push audio with the audio processing application based on the second audio data.
Optionally, the data acquisition module includes: and the second audio data reading module is used for reading the second audio data from a third device file corresponding to the third microphone device by using the operating system.
Optionally, before generating push audio for pushing to the live platform, the apparatus 600 further includes: a device file deployment module, configured to deploy, using a virtual sound card driver, a first device file and a second device file associated with the first device file in the operating system; and a device file association module, configured to associate the first device file with the first microphone device in the operating system.
Further, fig. 7 shows an audio processing apparatus 700 according to the second aspect of the present embodiment, the apparatus 700 corresponding to the method according to the second aspect of embodiment 1. Referring to fig. 7, the apparatus 700 includes: a second push audio generation module 710, configured to generate push audio for pushing to the live platform; and a second push audio writing module 720 for writing first audio data of the push audio to a first device file configured by the operating system, wherein the first device file corresponds to a first microphone device associated with the live application, and the live application corresponds to the live platform.
Therefore, according to the present embodiment, the technical effect of ensuring that the anchor can conduct audio live broadcasting on the desired live platform is achieved.
Example 3
Fig. 8 shows an audio live device 800 according to the first aspect of the present embodiment, and the device 800 corresponds to the method according to the first aspect of embodiment 1. Referring to fig. 8, the device 800 includes: a first processor; and a first memory, connected to the first processor, configured to provide the first processor with instructions for performing the following steps: generating push audio for pushing to the live platform; writing first audio data of the push audio into a first device file, where the first device file corresponds to a first microphone device associated with a live application; and reading, with the live application via the operating system, the first audio data from the first device file and transmitting the first audio data to the live platform, where the live application corresponds to the live platform.
Optionally, the first device file further corresponds to a second microphone device disposed on a virtual sound card of the live device, the second microphone device is a virtual microphone device, and the operation of writing the first audio data of the push audio into the first device file includes: writing the push audio into a second device file, wherein the second device file corresponds to the virtual player device of the virtual sound card; and invoking push audio from the second device file using the virtual sound card and writing the push audio to the first device file.
Optionally, the operation of reading the first audio data from the first device file via the operating system with the live application includes: transmitting a request to acquire data from the first microphone device to an operating system using a live application; and receiving, with the live application, first audio data from the operating system read by the operating system from the first device file.
Optionally, the operation of generating push audio for pushing to the live platform includes: obtaining second audio data from at least one audio source, wherein the at least one audio source includes a third microphone device different from the first microphone device and the second microphone device; and generating push audio with the audio processing application based on the second audio data.
Optionally, the operation of obtaining second audio data from at least one audio source includes: reading, by the operating system, the second audio data from a third device file corresponding to the third microphone device.
Optionally, before the push audio for pushing to the live platform is generated, the processing further comprises: deploying the first device file and a second device file associated with the first device file in the operating system by using a virtual sound card driver; and associating the second device file with the first microphone device in the operating system.
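On a Linux-style system this deployment step could be approximated with the ALSA loopback driver, which creates paired playback and capture device files when it is loaded. The sketch below is only an analogy for the virtual sound card driver described here; the module name snd-aloop and the /dev/snd layout are assumptions, and loading the module requires root privileges.

```python
import subprocess
from pathlib import Path

def deploy_virtual_sound_card() -> list[str]:
    """Load a loopback-style sound card driver and list the PCM device files
    it creates; one of the capture nodes would then be presented to the live
    application as the first microphone device."""
    subprocess.run(["modprobe", "snd-aloop"], check=True)  # requires root
    # Paired playback (pcmC*D*p) and capture (pcmC*D*c) nodes appear under /dev/snd.
    return sorted(str(p) for p in Path("/dev/snd").glob("pcmC*D*"))
```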
Further, fig. 9 shows an audio processing apparatus 900 according to the second aspect of the present embodiment, the apparatus 900 corresponding to the method according to the second aspect of embodiment 1. Referring to fig. 9, the apparatus 900 is configured to perform the following processing: generating push audio for pushing to the live platform; and writing first audio data of the push audio into a first device file configured by the operating system, wherein the first device file corresponds to a first microphone device associated with the live application, and the live application corresponds to the live platform.
Therefore, according to this embodiment, the technical effect of ensuring that an anchor can carry out audio live broadcasting through the desired live platform is achieved.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present invention, the description of each embodiment has its own emphasis; for a part that is not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The above-described apparatus embodiments are merely exemplary. For example, the division of the units is merely a logical function division, and other division manners may be used in an actual implementation; multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection shown or discussed between the components may be implemented through some interfaces, units, or modules, and may be electrical or take other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that those skilled in the art may make several improvements and modifications without departing from the principles of the present invention, and such improvements and modifications shall also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. An audio live method for a live device, comprising:
generating push audio for pushing to the live platform;
writing first audio data of the push audio into a first device file, wherein the first device file corresponds to a first microphone device associated with a live application; and
reading the first audio data from the first device file through an operating system by using the live application, and transmitting the first audio data to the live platform, wherein the live application corresponds to the live platform.
2. The method of claim 1, wherein the first device file further corresponds to a second microphone device disposed on a virtual sound card of the live device, the second microphone device being a virtual microphone device, and the operation of writing the first audio data of the push audio to the first device file comprises:
writing the push audio into a second device file, wherein the second device file corresponds to a virtual player device of the virtual sound card; and
retrieving the push audio from the second device file by using the virtual sound card, and writing the push audio into the first device file.
3. The method of claim 2, wherein the operation of reading the first audio data from the first device file via the operating system with the live application comprises:
transmitting, with the live application, a request to the operating system to obtain data from the first microphone device; and
receiving, with the live application, the first audio data read by the operating system from the first device file.
4. The method of claim 3, wherein the operation of generating the push audio for pushing to the live platform comprises:
obtaining second audio data from at least one audio source, wherein the at least one audio source comprises a third microphone device different from the first microphone device and the second microphone device; and
generating the push audio with an audio processing application based on the second audio data.
5. The method of claim 1 or 4, wherein the operation of obtaining second audio data from at least one audio source comprises:
reading the second audio data from a third device file corresponding to the third microphone device by using the operating system.
6. The method of claim 1, further comprising, prior to generating push audio for pushing to the live platform:
deploying the first device file and a second device file associated with the first device file in the operating system by using a virtual sound card driver; and
associating the second device file with the first microphone device in the operating system.
7. An audio processing method for a live broadcast device, comprising:
generating push audio for pushing to the live platform; and
writing first audio data of the push audio into a first device file configured by an operating system, wherein the first device file corresponds to a first microphone device associated with a live application, and the live application corresponds to the live platform.
8. A storage medium comprising a stored program, wherein, when the program is run, a processor performs the method of any one of claims 1 to 7.
9. An audio live broadcast apparatus, comprising:
a first push audio generation module, configured to generate push audio for pushing to a live platform;
a first push audio writing module, configured to write first audio data of the push audio into a first device file, wherein the first device file corresponds to a first microphone device associated with a live application; and
a first audio data reading module, configured to read the first audio data from the first device file through an operating system by using the live application and to transmit the first audio data to the live platform, wherein the live application corresponds to the live platform.
10. An audio live broadcast apparatus, comprising:
a first processor; and
a first memory, coupled to the first processor and configured to provide the first processor with instructions for performing the following processing steps:
generating push audio for pushing to a live platform;
writing first audio data of the push audio into a first device file, wherein the first device file corresponds to a first microphone device associated with a live application; and
reading the first audio data from the first device file through an operating system by using the live application, and transmitting the first audio data to the live platform, wherein the live application corresponds to the live platform.
CN202410176514.6A 2024-02-08 2024-02-08 Audio live broadcast and audio processing method, device and storage medium Pending CN118101629A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410176514.6A CN118101629A (en) 2024-02-08 2024-02-08 Audio live broadcast and audio processing method, device and storage medium

Publications (1)

Publication Number Publication Date
CN118101629A 2024-05-28

Family

ID=91162475

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410176514.6A Pending CN118101629A (en) 2024-02-08 2024-02-08 Audio live broadcast and audio processing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN118101629A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080268961A1 (en) * 2007-04-30 2008-10-30 Michael Brook Method of creating video in a virtual world and method of distributing and using same
CN107659831A (en) * 2017-05-19 2018-02-02 腾讯科技(北京)有限公司 Media data processing method, client and storage medium
CN110662082A (en) * 2019-09-30 2020-01-07 北京达佳互联信息技术有限公司 Data processing method, device, system, mobile terminal and storage medium
CN111314724A (en) * 2020-02-18 2020-06-19 华为技术有限公司 Cloud game live broadcasting method and device
CN116095397A (en) * 2023-01-18 2023-05-09 杭州星犀科技有限公司 Live broadcast method, live broadcast device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination