WO2021164043A1 - Audio and video transmission device and audio and video transmission system - Google Patents

Audio and video transmission device and audio and video transmission system

Info

Publication number: WO2021164043A1 (PCT/CN2020/076673)
Authority: WIPO (PCT)
Prior art keywords: audio, video, signal, video transmission, transmission device
Application number: PCT/CN2020/076673
Other languages: English (en), French (fr)
Inventors: 刘德志, 马强
Original assignee: 深圳市昊一源科技有限公司
Application filed by 深圳市昊一源科技有限公司
Priority to US 17/799,973 (now US11997420B2)
Publication of WO2021164043A1


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/04 - Systems for the transmission of one television signal, i.e. both picture and sound, by a single carrier
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 - Structure of client; Structure of client peripherals
    • H04N21/422 - Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223 - Cameras
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/76 - Television signal recording
    • H04N5/765 - Interface circuits between an apparatus for recording and another apparatus
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 - Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04 - Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/16 - Vocoder architecture
    • G10L19/167 - Audio streaming, i.e. formatting and decoding of an encoded audio signal representation into a data stream for transmission or storage purposes
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 - Structure of client; Structure of client peripherals
    • H04N21/422 - Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42203 - Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/436 - Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N21/4363 - Adapting the video stream to a specific local network, e.g. a Bluetooth® network
    • H04N21/43637 - Adapting the video stream to a specific local network, e.g. a Bluetooth® network involving a wireless protocol, e.g. Bluetooth, RF or wireless LAN [IEEE 802.11]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 - Mixing
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/76 - Television signal recording
    • H04N5/91 - Television signal processing therefor
    • H04N5/92 - Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/76 - Television signal recording
    • H04N5/91 - Television signal processing therefor
    • H04N5/92 - Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N5/9201 - Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving the multiplexing of an additional signal and the video signal
    • H04N5/9202 - Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving the multiplexing of an additional signal and the video signal, the additional signal being a sound signal
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 - Details of transducers, loudspeakers or microphones
    • H04R1/08 - Mouthpieces; Microphones; Attachments therefor
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 - Circuits for transducers, loudspeakers or microphones
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 - Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2420/00 - Details of connection covered by H04R, not provided for in its groups
    • H04R2420/07 - Applications of wireless loudspeakers or wireless microphones
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W84/00 - Network topologies
    • H04W84/02 - Hierarchically pre-organised networks, e.g. paging networks, cellular networks, WLAN [Wireless Local Area Network] or WLL [Wireless Local Loop]
    • H04W84/10 - Small scale networks; Flat hierarchical networks
    • H04W84/12 - WLAN [Wireless Local Area Networks]

Definitions

  • This specification relates to the field of communication technology, and in particular to an audio and video transmission device and an audio and video transmission system.
  • During shooting, the video signal of the subject and the audio signal that needs to be played synchronously with it must be collected at the same time; the two signals are then synchronously aligned, and the resulting audio and video mixed signal is transmitted to the video receiving end.
  • For example, the video signal of the subject can be collected by the camera device while the audio signal is collected by the camera device's built-in microphone, and the audio and video mixed signal generated after synchronization processing is sent to the video receiving end via wireless transmission technology.
  • However, the built-in microphone is only suitable for scenes where the audio signal source and the camera device are close to each other; if the two are far apart, the built-in microphone collects sound poorly. In addition, noise from the motor and fan on the camera device causes interference, so the quality of the collected audio signal cannot be guaranteed.
  • The embodiments of this specification provide an audio and video transmission system and an audio and video transmission device.
  • an audio and video transmission system including an audio and video transmission device and at least one wireless microphone transmitting device;
  • the wireless microphone transmitting device is used to send the audio signal collected by the wireless microphone to the audio and video transmission device;
  • the audio and video transmission device is used to connect to an external video capture device and to the wireless microphone transmitting device, respectively, to receive the audio signal, transmit the audio signal to the video capture device, acquire from the video capture device the mixed signal generated from the audio signal and the video signal, and process and output the mixed signal.
  • an audio and video transmission device is provided, the audio and video transmission device is respectively connected to an external video capture device and a wireless microphone transmitter, and the audio and video transmission device includes:
  • An audio transmission unit configured to receive the audio signal collected by the wireless microphone from the wireless microphone transmitting device, and transmit the audio signal to the video collecting device;
  • the video transmission unit is configured to obtain the mixed signal of the audio signal and the video signal from the video acquisition device, and output the mixed signal.
  • the audio signal of the audio signal source is collected by the wireless microphone physically separated from the video collecting device.
  • The collected audio signal is not transmitted directly by the wireless microphone to the video collecting device; instead, it is transmitted through an audio and video transmission system:
  • the audio and video transmission system has a wireless microphone transmitter and an audio and video transmission device.
  • The audio signal is sent from the wireless microphone transmitter to the audio and video transmission device, and then transmitted by the audio and video transmission device to the video acquisition device.
  • the video capture device can still complete the synchronization processing of the audio signal and the video signal, generate the audio and video mixed signal, and output the mixed signal to the audio and video transmission device.
  • The solution of the embodiments of the present specification therefore enables audio signal collection, as well as transmission of both the audio signal and the audio-video mixed signal, in shooting scenes where the audio signal source is far away from the video collection device.
  • Placing the microphone at a location convenient for picking up the audio signal source improves the quality of the collected audio signal and ensures the quality of the final audio and video.
  • Transmission of the audio signal and of the audio and video mixed signal is realized at the same time, which solves the transmission problem of both signals in shooting scenes where the audio signal source is far away from the video capture device.
  • Transmitting the two signals with one device also simplifies equipment installation during shooting.
  • Fig. 1 is a schematic diagram of an audio and video transmission system according to an embodiment of this specification.
  • Figs. 2-6 are schematic structural diagrams of an audio and video transmission device according to embodiments of this specification.
  • Fig. 7 is a schematic structural diagram of an audio transmission unit according to an embodiment of the present specification.
  • Fig. 8 is a schematic structural diagram of a video transmission unit according to an embodiment of the present specification.
  • Fig. 9 is a schematic diagram of an audio and video transmission system according to an embodiment of this specification.
  • FIG. 10 is a schematic diagram of the structure of a wireless microphone according to an embodiment of the present specification.
  • FIG. 11 is a schematic diagram of the structure of an audio and video transmission device according to an embodiment of this specification.
  • FIG. 12 is a schematic diagram of the structure of a wireless video receiving terminal according to an embodiment of the present specification.
  • Although the terms first, second, third, etc. may be used in this specification to describe various information, the information should not be limited to these terms. These terms are only used to distinguish information of the same type from each other.
  • first information may also be referred to as second information, and similarly, the second information may also be referred to as first information.
  • The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
  • During shooting, the camera device can simultaneously collect the video signal of the subject and the audio signal of the audio signal source; the audio signal and the video signal are synchronized to obtain the audio and video mixed signal, which is then output to the video receiving end.
  • the video receiving end is far away from the camera device. Therefore, the audio and video mixed signal collected by the camera device can be sent to the video receiving end through wireless transmission technology so that it can be viewed remotely at the video receiving end.
  • When the camera device collects the audio signal of the photographed object, it mostly uses its built-in microphone. This method is suitable when the camera device is close to the audio signal source; when it is far away, the quality of the collected audio signal is poor, which affects the playback effect.
  • In view of this, the embodiments of this specification provide an audio and video transmission device and an audio and video transmission system, which are used to realize the collection of audio signals, and the transmission of both the audio signal and the audio and video mixed signal, in shooting scenes where the audio signal source is far away from the video acquisition device.
  • the audio and video transmission system 10 may include an audio and video transmission device 11 and a wireless microphone transmitting device 12.
  • the number of wireless microphone transmitting devices 12 is not limited, and two wireless microphone transmitting devices 12 are shown as an example in FIG. 1.
  • a wireless microphone transmitting device 12 is connected to a wireless microphone 13 for sending the audio signal of the audio signal source collected by the wireless microphone 13 to the audio and video transmission device 11.
  • the audio and video transmission device 11 is also connected to a video capture device 14, and the video capture device 14 is used to capture the video signal of the photographed object.
  • The audio and video transmission device 11 sends the audio signal received from the wireless microphone transmitter 12 to the video capture device 14, so that the video capture device 14 synchronizes the audio signal with the video signal of the subject to generate a mixed audio and video signal (hereinafter referred to as a mixed signal), and outputs the mixed signal to the audio and video transmission device 11, so that the audio and video transmission device 11 outputs the mixed signal.
  • The audio signal source may be the subject, or another sound source that needs to be played synchronously with the subject's video (such as the voice of a narrator who explains the video), and so on.
  • the wireless microphone 13 can be placed close to the audio signal source.
  • For example, the wireless microphone 13 can be carried on the narrator's body.
  • The wireless microphone transmitter 12 and the wireless microphone 13 may adopt an integrated design, for example a handheld type or a small integrated lavalier type.
  • the wireless microphone transmitter 12 can send the collected audio signals to the audio and video transmission device 11 through various wireless transmission technologies, such as Bluetooth, WIFI, Zigbee and other wireless transmission technologies. Considering the transmission distance and transmission effect, in shooting scenes where the audio signal source and the video capture device 14 are far away, WIFI transmission technology is usually selected.
  • For example, the collected audio signal can be digitally modulated, and the modulated audio signal can then be transmitted in the 1.9 GHz DECT (Digital Enhanced Cordless Telecommunications), 2.4 GHz ISM, or 5 GHz ISM frequency band.
  • One audio and video transmission device 11 can be connected to at least two wireless microphone transmitters 12 at the same time, and each wireless microphone transmitter 12 separately sends the audio signal collected by the wireless microphone 13 connected to it to the audio and video transmission device 11.
  • In this case, a TDMA (Time Division Multiple Access) transmission method can be used: different wireless microphone transmitters 12 occupy different time slots, so that the multiple wireless microphone transmitters 12 and the audio and video transmission device 11 occupy only one independent wireless channel during data transmission. A sketch of this slot sharing is given below.
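  • As a rough illustration of the time-slot sharing just described, the following Python sketch assigns each wireless microphone transmitter a fixed slot offset inside a repeating TDMA frame and computes when a given transmitter may next send. The frame period, slot width, and transmitter count are illustrative assumptions, not values from this specification.

```python
# Minimal TDMA slot-allocation sketch. The frame period, slot width and
# transmitter count are illustrative assumptions, not values from the patent.

FRAME_PERIOD_MS = 3.0   # one TDMA frame, e.g. matching a 3 ms wake interval
SLOT_WIDTH_MS = 0.5     # air-interface slot reserved for one transmitter

def slot_schedule(num_transmitters: int) -> dict[int, float]:
    """Map each transmitter id to its transmit offset within the TDMA frame."""
    if num_transmitters * SLOT_WIDTH_MS > FRAME_PERIOD_MS:
        raise ValueError("too many transmitters for one frame")
    return {tx_id: tx_id * SLOT_WIDTH_MS for tx_id in range(num_transmitters)}

def next_transmit_time(tx_id: int, now_ms: float, schedule: dict[int, float]) -> float:
    """Earliest time >= now_ms at which transmitter tx_id may start sending."""
    offset = schedule[tx_id]
    frames_elapsed = int(now_ms // FRAME_PERIOD_MS)
    candidate = frames_elapsed * FRAME_PERIOD_MS + offset
    return candidate if candidate >= now_ms else candidate + FRAME_PERIOD_MS

if __name__ == "__main__":
    schedule = slot_schedule(2)                  # two wireless microphone transmitters
    print(schedule)                              # {0: 0.0, 1: 0.5}
    print(next_transmit_time(1, 7.2, schedule))  # 9.5 ms (next frame's slot for tx 1)
```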
  • the audio and video transmission device 11 and the video capture device 14 may be physically connected or wirelessly connected.
  • the audio and video transmission device 11 can be placed close to the video capture device 14.
  • the video capture device 14 is designed with a physical interface connected to the audio and video transmission device 11, and the two can be connected through the physical interface.
  • the audio and video transmission device 11 can be fixedly installed on the video acquisition device 14.
  • After the audio and video transmission device 11 receives the audio signal sent by the wireless microphone transmitter 12, it can input the audio signal to the video acquisition device 14.
  • The video acquisition device 14 synchronizes the received audio signal with the collected video signal to generate a mixed signal in which the audio signal accompanies the video signal, and then outputs the mixed signal to the audio and video transmission device 11, which processes the mixed signal and outputs it.
  • the audio and video transmission device 11 may have components such as a display screen, and decode the mixed signal and output it to the display interface for the user to watch.
  • the audio and video transmission device 11 may also send the mixed signal to other video receiving terminals so that the user can watch the video through the video receiving terminal.
  • the director needs to remotely monitor the shooting effect.
  • the mixed signal can be sent to the remote monitoring equipment, so that the director can monitor the shooting effect.
  • the audio and video transmission device 11 can also output the mixed signal to the video receiving terminal through wireless transmission technology, such as Bluetooth, WIFI, Zigbee and other wireless transmission technologies.
  • WIFI transmission can be selected.
  • The video receiving end can be a smart terminal with a universal WIFI connection function and universal H.264/H.265 video decoding capability, such as a personal terminal like a mobile phone or computer, or it can be a standalone device with more powerful hardware H.264/H.265 decoding capability and stronger WIFI performance.
  • In the above process, the audio signal of the audio signal source is collected through the wireless microphone 13 and sent to the audio and video transmission device 11 through the wireless microphone transmitter 12; the audio and video transmission device 11 inputs the audio signal to the video capture device 14, the video capture device 14 synchronizes the audio signal with the collected video signal to generate a mixed signal and outputs the mixed signal to the audio and video transmission device 11, and the audio and video transmission device 11 then outputs the mixed signal.
  • In this way, the audio signal of the audio signal source is collected through the external wireless microphone 13 to improve its quality, and the audio and video signals are transmitted through the purpose-built audio and video transmission device 11: the audio signal is input to the video acquisition device 14 so that it can be synchronized with the video signal to generate the mixed audio and video signal.
  • the audio and video transmission device 11 can also realize the transmission of the mixed signal at the same time, and send the audio and video mixed signal to the video receiving end.
  • The transmission of the audio signal and of the mixed audio and video signal is realized at the same time, which not only facilitates the installation of equipment during shooting, but also solves the problem of audio collection and transmission in remote shooting scenes and improves the final audio and video effect.
  • the audio and video transmission device 11 may include a video transmission unit 110 and an audio transmission unit 111.
  • The audio transmission unit 111 receives, through a wireless transmission channel, the audio signal transmitted by the wireless microphone transmitter 12 and transmits it to the video capture device 14; the video transmission unit 110 sends the mixed signal output by the video capture device 14, in which the audio signal and the video signal are synchronized, to the video receiving terminal 15 through a wireless transmission channel.
  • the audio and video transmission device 11 can transmit audio signals and mixed signals independently.
  • The audio and video transmission device 11 can include a video transmission unit 110, an audio transmission unit 111, accessories 112, and multiple antennas (113 and 114 in the figure).
  • Accessories 112 may include batteries, buttons, displays, housings, and so on.
  • the video transmission unit 110 and the audio transmission unit 111 each adopt independent wireless transmission channels and antennas, and transmit data through their respective wireless transmission channels.
  • the audio transmission unit 111 may receive the audio signal sent by the wireless microphone transmitter 12 through the antenna 114 and output it to the video acquisition device 14.
  • The video transmission unit 110 acquires the mixed signal from the video acquisition device 14 and sends the mixed signal to the video receiving terminal 15 through the antenna 113.
  • For example, the video transmission unit 110 works in the 5 GHz UNII WIFI band, and the audio transmission unit 111 works in the UHF (Ultra High Frequency) or 2.4 GHz band.
  • To reduce the number of antennas, the audio and video transmission device 11 may further include a dual-frequency duplex filter 116 and a dual-frequency antenna 115: the radio-frequency signals of the two units are combined by the dual-frequency duplex filter 116 and then transmitted through the dual-frequency antenna 115.
  • When the audio signal and the mixed signal each use an independent transmission channel, wireless channel resources are occupied (at least two wireless channels), and the two transmissions may also interfere with each other. It is therefore desirable to reduce the occupation of wireless transmission channel resources and the interference between the two transmissions.
  • To this end, the audio and video transmission device 11 may use the same wireless channel to transmit the audio signal and the mixed signal in a time-division-multiplexed manner; that is, the audio signal and the mixed signal may use a unified wireless transmission technology, for example WIFI IEEE 802.11.
  • Wireless video signal transmission is characterized by a high bit-stream bandwidth but can tolerate a longer delay, whereas the wireless microphone audio signal has a low bit-stream bandwidth but relatively strict delay requirements. Transmitting the two signals over the same wireless channel can therefore satisfy the low-bandwidth, low-power, low-delay characteristics of the wireless microphone audio signal while still meeting the high transmission bandwidth requirement of the wireless video signal.
  • The audio signal and the mixed signal can be transmitted in the 2.4 GHz, 5 GHz UNII, or 60 GHz ISM frequency bands, which can satisfy the high-rate bandwidth requirement of wireless video transmission; the less crowded 5 GHz UNII and 60 GHz ISM bands also provide more reliable protection for real-time audio signal transmission.
  • both the audio signal and the mixed signal are transmitted using WIFI technology, and the two signals can share the WIFI transmission channel.
  • the audio and video transmission device may also include a WIFI transmission unit 117.
  • the video transmission unit 110 and the audio transmission unit 111 may share the WIFI transmission unit 117.
  • The audio transmission unit 111 receives the audio signal from the wireless microphone transmitter 12 through the WIFI transmission unit 117.
  • the video transmission unit 110 sends the mixed signal to the video receiving terminal 15 through the WIFI transmission unit 117.
  • In some embodiments, the audio and video transmission device 11 further includes a central processing unit 118. The central processing unit 118 is configured to receive the mixed signal from the video transmission unit 110, respectively compress and encode the audio signal and the video signal in the mixed signal, and output the result to the WIFI transmission unit 117 so that the WIFI transmission unit 117 transmits it to the video receiving terminal 15; the central processing unit 118 also receives the audio signal sent by the wireless microphone transmitter 12 from the WIFI transmission unit 117, converts it into a digital compressed audio signal, and sends it to the audio transmission unit 111.
  • That is, after the WIFI transmission unit 117 in the audio and video transmission device 11 receives the audio signal from the wireless microphone transmitter 12, it can first send the audio signal to the central processing unit 118, and the central processing unit 118 converts the audio signal into a digital compressed audio signal.
  • the format of the obtained digital compressed audio signal is the same as the format of the audio signal sent by the wireless microphone transmitter 12, and may be one encoded by an open source encoding algorithm such as ADPCM, Opus-CELT, etc.
  • The central processing unit 118 then sends the digital compressed audio signal to the audio transmission unit 111, so that the audio transmission unit 111 obtains an analog audio signal after further processing and outputs the analog audio signal to the video acquisition device 14.
  • The video acquisition device 14 synchronously processes the audio signal input by the audio transmission unit 111 with the collected video signal, generates an audio and video mixed signal, and outputs the mixed signal to the video transmission unit 110. The video transmission unit 110 sends the mixed signal to the central processing unit 118, so that the central processing unit 118 respectively compresses and encodes the audio signal and the video signal in the mixed signal to obtain the encoded mixed signal, and then outputs the encoded mixed signal to the WIFI transmission unit 117, which sends it to the video receiving terminal 15.
  • the central processing unit 118 may be an SOC chip that integrates a CPU and H.264/H.265 coding and decoding functions.
  • the audio transmission unit 111 may include an audio processing subunit 111a, a sampling subunit 111b, and a first interface subunit 111c.
  • The audio processing subunit 111a is used to obtain the digital compressed audio signal converted by the central processing unit 118 and to decode it; it can also complete equalization, sound-effect, noise-reduction, and volume-level adjustment of the digital compressed audio signal.
  • the sampling subunit 111b is used to sample and recover the decoded digital compressed audio signal to obtain an analog audio signal.
  • The sampling subunit 111b may be a multi-channel audio sampling DAC (Digital to Analog Converter).
  • the first interface subunit 111c is configured to output the analog audio signal obtained by sampling and recovery to the video acquisition device 14, so that the video acquisition device 14 synchronizes the audio signal obtained by the sampling and recovery with the collected video signal.
  • The first interface subunit 111c includes a multi-channel amplification processor, which performs multi-channel audio amplification on the analog audio recovered by sampling and outputs an analog audio signal whose amplitude meets the level requirements of the external microphone input interface of the video acquisition device 14.
  • the video transmission unit 110 may include a second interface sub-unit 110a and a video signal processing sub-unit 110b.
  • The second interface sub-unit 110a is used to separate the audio signal and the video signal in the mixed signal and to transmit the audio signal in the mixed signal to the central processing unit 118 for compression and encoding.
  • The video signal processing subunit 110b is used to process the video signal and send it to the central processing unit 118 for compression and encoding.
  • the second interface subunit 110a includes an SDI video receiver, an HDMI video receiver, and the video capture device 14 outputs interface signals of SDI and HDMI standards, and the interface signals are mixed signals including audio signals and video signals.
  • The video signal in the mixed signal is converted into a standard digital parallel video signal (in BT1120 or MIPI format) by the SDI video receiver and the HDMI video receiver and then sent to the video processing sub-unit 110b, which can be an FPGA chip; the FPGA chip selects a video signal and sends it to the central processing unit 118, which compresses and encodes the video signal.
  • The audio signal in the mixed signal is output directly to the central processing unit 118 and is compressed and encoded separately from the video signal.
  • In some embodiments, the audio and video transmission device 11 further includes a CCU (Camera Control Unit) control interface, and the central processing unit 118 controls the CCU of the video capture device 14 through the CCU control interface, so that adjustment of the lens, image parameters, and auxiliary data of the video capture device 14 can be completed.
  • the audio and video transmission device 11 further includes an expansion interface connected to the central processing unit 118 for connecting one or more of the following accessories: LCD screen, buttons, and power supply.
  • the audio and video transmission device 11 may include accessories such as a screen, a button, and a power supply. These accessories are connected to the central processing unit 118 through an expansion interface, so that the central processing unit 118 can control these accessories.
  • the audio and video transmission device 11 can send it to other video receiving terminals 15.
  • the mixed audio and video signals collected by the video acquisition device 14 need to be sent to the video receiving terminal 15 so that directors, lighting engineers and other staff can make corresponding adjustments according to the effects of the video.
  • In some embodiments, the audio and video transmission system 10 may also include an APP installed on an electronic device to serve as the video receiving terminal 15; the audio and video transmission device 11 can send the mixed signal, in which the audio and video signals are synchronized, to the APP so that the user can view the captured audio and video through the APP.
  • the electronic device may be a user's terminal device such as a mobile phone, a tablet, a computer, etc.
  • The user installs a designated APP on the electronic device in advance and, through the APP, receives from the audio and video transmission device 11 the mixed signal in which the audio and video signals are synchronized.
  • The audio and video transmission device 11 can be connected to the wireless microphone transmitter 12 and to the electronic equipment installed with the APP through a WIFI channel; the WIFI transmission unit of the audio and video transmission device 11 works in AP mode, while the WIFI transmission units of the wireless microphone transmitting device 12 and of the electronic equipment installed with the APP work in Station mode.
  • the audio and video transmission device 11 can turn on the WIFI hotspot, the wireless microphone transmitter 12 and the electronic device installed with APP can search the SSID of the audio and video transmission device 11, and establish a WIFI connection with the audio and video transmission device 11 through a preset password. Then the wireless microphone transmitter 12 can transmit audio signals to the audio and video transmission device 11 through the established WIFI connection channel, and the electronic equipment installed with APP can receive the mixed signal from the audio and video transmission device 11 through the established WIFI connection channel.
  • the audio signal of the wireless microphone used in the video shooting field generally requires a very low transmission delay, usually less than 10ms, and has a transmission distance of 50 to hundreds of meters.
  • the reason for the stricter delay requirements for the transmission of the wireless microphone audio signal is that if the delay is severe, the sound and the image will not be synchronized, which will seriously affect the final video effect. Therefore, it is critical to reduce the transmission delay of the wireless microphone audio signal.
  • the audio signal transmitted by the wireless microphone 13 is delayed mainly due to the following aspects: First, after the wireless microphone 13 collects the audio signal from the audio signal source, the wireless microphone transmitter 12 usually needs to compress and encode the audio signal. Then it is sent to the audio and video transmission device 11 through the wireless channel. Generally, the audio signal can be encoded frame by frame.
  • the wireless microphone transmitter 12 will also produce a certain delay while waiting for the collected audio data to reach the length of one frame, and this delay is related to the frame length of the audio data.
  • the process of compressing the audio signal takes a certain amount of time, which will cause some delays.
  • After the audio and video transmission device 11 receives the audio signal, the signal needs to be decoded, and the decoding process will also cause a certain delay.
  • Since the audio signal is sent in time slots, a certain air-interface time slot delay will occur between the time slots.
  • the audio signal is sent from the wireless microphone transmitter 12 to the audio and video transmission device 11, which will cause a certain transmission and reception delay.
  • a low-latency and low-loss compression technology may be used to compress and encode the audio signal.
  • The low-latency, low-loss compression technology can be one of OPUS-CELT, ADPCM (Adaptive Differential Pulse Code Modulation), LC3, and LC3plus.
  • The frame length of the compressed audio signal can be configured, and for audio signals with a short frame structure the quality after compression can still be guaranteed. Therefore, in some embodiments, in order to reduce the delay caused by the first aspect above, the frame length of the audio signal collected by the wireless microphone 13 can be configured to be less than a specified length, adopting a short frame structure as far as possible, for example less than 2.5 ms, in order to reduce the waiting delay caused by intra-frame buffering of the audio signal.
  • the specified length can be set according to the actual situation, for example, it can be set to the minimum frame length supported by the aforementioned low-latency and low-loss compression technology.
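  • The buffering delay described above is simply the time needed to fill one audio frame, so it scales directly with the configured frame length. The sketch below uses an assumed sample rate, bit depth, and compression ratio (illustrative values only, not requirements of this specification) to show how a short frame such as 2.5 ms keeps the per-frame buffering wait small while the compressed frame stays only a few dozen bytes.

```python
# Illustrative frame-size and buffering-delay calculation. The 48 kHz / 16-bit
# mono format and the ~4:1 compression ratio are assumptions for illustration.

SAMPLE_RATE_HZ = 48_000
BITS_PER_SAMPLE = 16
CHANNELS = 1
COMPRESSION_RATIO = 4.0   # e.g. roughly what 4-bit ADPCM achieves vs 16-bit PCM

def frame_stats(frame_ms: float) -> dict[str, float]:
    samples = SAMPLE_RATE_HZ * frame_ms / 1000.0
    raw_bytes = samples * BITS_PER_SAMPLE * CHANNELS / 8
    return {
        "buffering_delay_ms": frame_ms,            # must wait for a full frame
        "raw_frame_bytes": raw_bytes,
        "compressed_frame_bytes": raw_bytes / COMPRESSION_RATIO,
    }

for frame_ms in (10.0, 5.0, 2.5):
    print(frame_ms, frame_stats(frame_ms))
# A 2.5 ms frame buffers only 2.5 ms of audio (~240 raw bytes, ~60 compressed),
# whereas a 10 ms frame adds 10 ms of waiting before encoding can even start.
```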
  • Since the audio signal transmission and the mixed signal transmission share a wireless transmission channel, and the audio signal has stricter delay requirements than the audio and video mixed signal, the transmission priority of the audio signal is set higher than that of the mixed signal in order to reduce the transmission delay of the audio signal. This ensures that the wireless microphone audio signal is transmitted first, reducing its transmission and reception delay and improving transmission reliability.
  • Besides the audio signal and the mixed signal, some control commands are also transmitted; among these, the transmission priority of the audio signal can be set to the highest to ensure that the audio signal is transmitted first.
  • the wireless microphone transmitter 12 and the audio and video transmission device 11 both use WIFI transmission technology.
  • The WIFI transmission units of the wireless microphone transmitter 12 and of the audio and video transmission device 11 can support the WMM QOS mechanism, which is used to set the transmission priority of each signal. One way of mapping the signals to WMM priorities is sketched below.
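  • One way to realize this prioritization with WMM is to map each traffic type to an 802.11e access category, giving the microphone audio the voice (AC_VO) category and the mixed video stream the video (AC_VI) category. The mapping below is only a sketch of that idea; which stream gets which category, and the category for the control commands, are our assumptions rather than values stated in this specification.

```python
# Sketch of mapping traffic types to WMM access categories (802.11e / WMM).
# AC_VO (voice) has the highest transmission priority, followed by AC_VI
# (video), AC_BE (best effort) and AC_BK (background). Which stream gets
# which category here is an illustrative assumption.

WMM_ACCESS_CATEGORIES = {
    "AC_VO": 3,  # highest priority
    "AC_VI": 2,
    "AC_BE": 1,
    "AC_BK": 0,  # lowest priority
}

TRAFFIC_TO_AC = {
    "wireless_mic_audio": "AC_VO",   # strict delay requirement -> highest priority
    "mixed_av_signal": "AC_VI",      # high bandwidth, tolerates more delay
    "control_commands": "AC_BE",     # small, non-real-time downlink packets
}

def tx_order(packets: list[str]) -> list[str]:
    """Return packets sorted so higher-priority access categories go first."""
    return sorted(packets,
                  key=lambda p: WMM_ACCESS_CATEGORIES[TRAFFIC_TO_AC[p]],
                  reverse=True)

print(tx_order(["mixed_av_signal", "control_commands", "wireless_mic_audio"]))
# ['wireless_mic_audio', 'mixed_av_signal', 'control_commands']
```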
  • WIFI technology consumes more power.
  • The wireless microphone transmitter 12 uses WIFI technology to transmit audio signals to the audio and video transmission device 11, and both include a WIFI transmission unit. To reduce the power consumption of the wireless microphone transmitter 12, its WIFI transmission unit can work alternately in a sleep state and an awake state; the sleep state has lower power consumption, so the WIFI transmission unit does not have to stay awake all the time, which reduces power consumption.
  • For example, the WIFI transmission unit of the wireless microphone transmitter 12 switches from the sleep state to the awake state at a preset time interval, and after the transmission task for that wake-up has been executed, it switches from the awake state back to the sleep state.
  • the transmission task may refer to tasks such as sending audio signals collected by the wireless microphone 13 to the audio and video transmission device 11 and receiving control instructions from the audio and video transmission device 11.
  • For example, the WIFI transmission unit of the wireless microphone transmitter 12 can wake up from the sleep state every 3 ms; after waking up it first sends the audio signal to the audio and video transmission device 11, then receives the control commands sent by the audio and video transmission device 11, and then enters the sleep state again.
  • The key to saving power in the WIFI transmission unit is to reduce the duration of the transmission time slot as much as possible. This duration can be reduced in the following ways: first, reduce the transmitted data rate, for example by encoding the audio signal with low-latency compression coding, which can reduce the data rate by more than 4 times; second, increase the air-interface transmission rate, for example by working at a higher MCS (Modulation and Coding Scheme) rate configuration; third, reduce the duration of the receiving time slot.
  • For the receiving time slot, the audio and video transmission device 11 can first buffer the control commands, wait for the wireless microphone transmitter 12 to wake up, and then send them, which also reduces the receiving time slot duration of the wireless microphone transmitter 12. A rough air-time estimate for these measures is sketched below.
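  • A simple way to see why these measures shorten the transmit slot is to divide the per-wake payload by the air-interface rate. The sketch below uses assumed payload sizes, PHY rates, and a fixed overhead term (not figures from this specification) to compare an uncompressed frame at a low MCS with a compressed frame at a higher MCS.

```python
# Rough transmit-slot duration estimate: slot time ~= payload bits / PHY rate.
# Payload sizes, PHY rates and the fixed overhead are illustrative assumptions.

def slot_duration_us(payload_bytes: float, phy_rate_mbps: float,
                     overhead_us: float = 50.0) -> float:
    """Approximate air time for one transmission, including fixed protocol overhead."""
    return payload_bytes * 8 / phy_rate_mbps + overhead_us

raw_frame_bytes = 240        # 2.5 ms of 48 kHz / 16-bit mono PCM
compressed_bytes = 60        # ~4x reduction from low-latency compression

print(slot_duration_us(raw_frame_bytes, phy_rate_mbps=6.5))    # low MCS, uncompressed
print(slot_duration_us(compressed_bytes, phy_rate_mbps=65.0))  # higher MCS, compressed
# Compressing the audio and using a faster MCS shrinks the air time from roughly
# 345 us to roughly 57 us per frame in this toy example, which in turn shortens
# the time the radio must stay awake each cycle.
```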
  • The WIFI transmission unit of the audio and video transmission device 11 can use the downlink data packet buffering technology of the WMM QOS power-saving mechanism: the control commands (that is, the downlink data packets) to be sent to the wireless microphone 13 are transmitted in a triggered manner.
  • The control command is first buffered.
  • When a trigger frame is received, it means that the wireless microphone transmitter 12 is awake at that moment, and the buffered control command can then be sent to the wireless microphone transmitting device 12.
  • Thus the wireless microphone transmitter 12 continuously cycles through working states such as transmit-receive-sleep-wake, i.e. it is periodically awakened. Because the real-time requirement of the audio signal is high, the wireless microphone transmitter 12 sends the audio signal to the audio and video transmission device 11 immediately after waking up, whereas the control commands sent by the audio and video transmission device 11 are small in data volume and have low real-time requirements. Therefore, while the wireless microphone transmitter 12 is sleeping, the audio and video transmission device 11 buffers the control commands and sends them collectively after the wireless microphone transmitter 12 wakes up: after waking up, the wireless microphone transmitter 12 first sends the audio signal and then receives the control commands sent by the audio and video transmission device 11. In this way, the power consumption of the wireless microphone transmitter 12 can be greatly reduced. A simplified model of this buffering is sketched below.
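  • The buffering behaviour described above can be modelled as a small downlink queue that holds control commands while the microphone sleeps and flushes them only when the microphone signals that it is awake (for example, by its uplink audio transmission acting as the trigger). This is a simplified model with invented class and method names, not the device firmware.

```python
# Simplified model of downlink control-command buffering for a sleeping station.
# Class and method names are hypothetical; this only illustrates the idea of
# holding commands until the wireless microphone transmitter wakes up.

from collections import deque

class DownlinkCommandBuffer:
    def __init__(self) -> None:
        self._pending: deque[str] = deque()
        self.mic_awake = False

    def queue_command(self, command: str) -> None:
        """Called whenever the transmission device has a command for the mic."""
        self._pending.append(command)

    def on_uplink_from_mic(self) -> list[str]:
        """The mic just transmitted (so it is awake): flush buffered commands."""
        self.mic_awake = True
        flushed = list(self._pending)
        self._pending.clear()
        return flushed

    def on_mic_sleep(self) -> None:
        self.mic_awake = False

buf = DownlinkCommandBuffer()
buf.queue_command("set_gain=+3dB")     # arrives while the mic is asleep
buf.queue_command("set_channel=2")
print(buf.on_uplink_from_mic())        # ['set_gain=+3dB', 'set_channel=2']
```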
  • In some embodiments, the WIFI transmission unit of the wireless microphone transmitter 12 is configured in single-input single-output (SISO) mode, and a single antenna transmits and receives data, reducing the number of antenna transmit and receive channels and thereby saving power.
  • In film and television shooting and commercial video shooting scenes, the photographer usually collects the video of the subject through a camera device and then sends it via wireless transmission technology to the director and other staff so that they can remotely monitor the shooting effect. During shooting, the video signal and audio signal of the subject can be collected at the same time.
  • The camera device has a built-in microphone to pick up the audio signal of the subject, but when the subject is far away from the camera device, the built-in microphone picks up sound poorly, which seriously affects the final collected audio and video.
  • the embodiments of this specification provide an audio and video transmission system, which can well solve the problem of audio signal collection and transmission in shooting scenes where the subject is far away from the camera device.
  • the audio and video transmission system includes wireless microphones 91, wherein the number of wireless microphones 91 is not limited, and two are taken as an example in the figure.
  • the number of wireless video receiving terminals 93 is also not limited.
  • The wireless video receiving terminal can be a smart terminal with a universal WIFI connection function and universal H.264/H.265 video decoding capability, or an independent device with stronger hardware H.264/H.265 decoding capability and stronger WIFI performance.
  • the wireless microphone 91 can adopt the integrated design of the microphone body and the wireless microphone transmitter in the above-mentioned embodiment. For example, it can be designed as a miniature lavalier model and carried by the subject, so that the microphone body can reduce the interference of external noise.
  • In this way, the audio signal of the subject is collected clearly, the quality of the collected audio signal is improved, and the audio signal is sent to the audio and video transmission device 92 through the wireless microphone transmitter.
  • the audio and video transmission device 92 can be installed on the camera device 95 and connected to the camera device 95 through a physical interface.
  • the audio and video transmission device 92 is used to receive the audio signal sent by the wireless microphone 91 and input it to the camera device 95 through a physical interface.
  • the camera device 95 synchronizes the audio signal and the collected video signal to generate a mixed audio and video signal.
  • The mixed signal is output to the audio and video transmission device 92, and the audio and video transmission device 92 sends the mixed signal to the wireless video receiving terminal 93 so that the director and other staff can view the captured audio and video.
  • the wireless video receiving terminal 93 may be a device installed with an APP, such as a mobile phone, a tablet, a notebook computer, or an independent hardware device, etc.
  • the APP may receive the audio and video signals of the camera 95 from the audio and video transmission device 92.
  • audio signals and mixed signals can use WIFI transmission technology.
  • WIFI IEEE 802.11 transmission technology can be uniformly adopted.
  • the WIFI transmission unit that transmits audio signals and mixed signals can work in the 2.4GHz, 5GHz UNII or 60GHz ISM frequency bands.
  • the networking mode in the transmission process is that the audio and video transmission device 92 works in the AP mode, and the wireless microphone 91 and the wireless video receiving terminal 93 work in the Station mode.
  • the audio signals and mixed signals can be transmitted on the same wireless channel in a time-division multiplexing manner.
  • the structure and processing flow of the wireless microphone 91, the audio and video transmission device 92, and the wireless video receiving terminal 93 are respectively introduced in detail below.
  • FIG. 10 is a schematic structural diagram of the wireless microphone 91.
  • the wireless microphone 91 includes a microphone audio drive amplifier circuit 912, which is connected to the microphone body (910 or 911), and is mainly used for analog audio signal generation and audio signal adjustment.
  • the wireless microphone 91 supports the input of the built-in microphone body 910 and the external microphone body 911.
  • Connection of the external microphone body 911 can be detected through impedance change detection or through digital signal detection at the multi-channel audio sampling ADC (Analog to Digital Converter) 913, so that the external microphone body 911 is prioritized as the main signal input.
  • After the audio drive amplifier circuit 912 generates the analog audio signal, the signal is input to the multi-channel audio sampling ADC 913.
  • The multi-channel audio sampling ADC 913 converts the analog audio signal into the original digital audio signal and provides pre-amplification gain adjustment and digital level gain adjustment; the CPU 915 can control and adjust the pre-amplification gain or the digital level gain to adjust the input volume of the microphone body.
  • the multi-channel audio sampling ADC 913 sends the original digital audio signal to the audio signal processing and compression coding DSP unit 914.
  • The audio signal processing and compression coding DSP unit 914 performs low-latency, low-loss compression coding of the audio signal and audio signal enhancement functions such as equalization and noise reduction.
  • the audio signal adopts low-delay and low-loss compression technology, such as one of OPUS-CELT, ADPCM, LC3, and LC3plus compression coding technology.
  • Audio signal processing and compression encoding DSP unit 914 sends the encoded and compressed digital compressed audio signal to CPU 915.
  • The CPU 915 runs the TCP/IP protocol stack to complete the conversion of the digital compressed audio signal into an IP data stream, wireless transmission parameter configuration management, and audio parameter configuration.
  • the CPU915 can also realize the control and management functions of accessories 917 such as buttons/LCD screens and power supplies, and the main control work is completed through the built-in firmware.
  • The CPU 915 transmits the audio signal to the WIFI transmission unit 916, and the WIFI transmission unit 916 transmits the audio signal to the audio and video transmission device 92 through the antenna 918.
  • The WIFI transmission unit 916 works in Station mode and the audio and video transmission device 92 works in AP mode; the two maintain a wireless communication connection. The WIFI transmission unit 916 mainly implements the WIFI wireless transmission protocol stack and wireless transmission link functions, can work alternately through a transmit-receive-sleep-wake cycle, and supports the WIFI WMM QOS mechanism and the WMM power-saving function.
  • the wireless microphone 91 also includes other accessories 917, such as buttons, LCD screens, power supplies, housings, and installation accessories.
  • the audio and video transmission device 92 includes an interface circuit part, which mainly includes an SDI video receiver 920, an HDMI video receiver 921, a CCU control signal interface circuit 922, and a multi-channel audio amplifier processor 923.
  • the interface part of the camera 95 includes SDI video interface 950, HDMI video interface 951, CCU control interface 952, and external microphone audio interface 953.
  • the camera 95 outputs SDI and HDMI standard video interface signals through the SDI video interface 950 and HDMI video interface 951.
  • The signal format is BT1120 or MIPI, and the signal is sent to the back-end FPGA chip 924, which mainly performs video signal processing and bus selection; the FPGA chip 924 selects the required video and sends it to the system SOC chip 925.
  • the system SOC chip 925 can be a chip that integrates CPU and H.264/H.265 encoding and decoding functions.
  • The SOC chip 925 compresses and encodes the video. The digital serial video interface signal also contains a dual-channel digital audio signal, which can be sent directly to the system SOC chip 925 and compressed separately from the video signal.
  • The SOC chip 925 is connected to the WIFI transmission unit 926 and sends the compressed mixed signal to the WIFI transmission unit 926, so that the WIFI transmission unit 926 sends it to the wireless video receiving terminal 93 through the antenna 9210.
  • the WIFI transmission unit 926 receives the audio signal sent by the wireless microphone 91 and transmits it to the SOC chip 925.
  • The WIFI transmission unit 926 of the audio and video transmission device 92 works in AP mode, can set an independent SSID, and can connect multiple wireless microphones 91 and wireless video receivers 93 to form a BSS (Basic Service Set) network unit.
  • the WIFI transmission unit 926 supports the WIFI WMM protocol, supports the WMM QOS mechanism and the WMM power saving function.
  • After the SOC chip 925 receives the audio signal collected by the wireless microphone, forwarded by the WIFI transmission unit 926, it can convert it into a digital compressed audio signal.
  • The format of the compressed audio signal is the same as that of the audio signal sent by the wireless microphone, for example a format produced by a low-latency, low-loss audio compression coding algorithm such as LC3 or LC3plus.
  • the SOC chip 925 sends the digital compressed audio signal to the audio signal processing and compression decoding DSP unit 927.
  • the audio signal processing and compression decoding DSP unit 927 completes the decoding and output of the digital compressed audio. At the same time, it can also complete the equalization, sound effect, noise reduction, and volume control.
  • The audio signal recovered after decoding can be a 48 kHz, 16-24 bit high-definition audio signal. The multi-channel audio sampling DAC 928 then converts the digital audio into analog audio, and the converted analog audio is processed by the multi-channel audio amplifier processor 923 to output an analog audio signal whose amplitude meets the level requirements of the external microphone audio interface 953 of the camera device 95.
  • The SOC chip 925 is also connected to the CCU control interface 952 of the camera 95 through the CCU control interface circuit 922, and can support control of the camera's CCU.
  • the SOC chip 925 is also connected to the accessory 929 through an expansion interface to realize the control of the accessory.
  • Accessories 929 include buttons, LCD screens, power supplies, housings and installation accessories.
  • The wireless video receiving terminal 93 can be a smart terminal with a universal WIFI connection function and universal H.264/H.265 video decoding capability, or it can be an independent device with stronger hardware H.264/H.265 decoding capability and stronger WIFI performance. If the wireless video receiving end is an independent device, its structure is shown in Fig. 12, and its work flow is introduced below in conjunction with the structure diagram.
  • The wireless video receiving terminal 93 includes a WIFI transmission unit 931, which receives the mixed signal sent by the audio and video transmission device 92 through the antenna 930. The WIFI transmission unit 931 works in Station mode and the audio and video transmission device 92 works in AP mode; the two maintain a wireless communication connection over which the mixed audio and video signal sent by the audio and video transmission device 92 is received.
  • After the WIFI transmission unit 931 receives the mixed signal, it sends it to the system SOC chip 932.
  • the system SOC chip 932 can be a chip that integrates CPU and H.264/H.265 coding and decoding functions.
  • The SOC chip 932 decompresses the video signal in the mixed signal and decodes the audio signal at the same time; the SOC chip 932 then sends the audio signal in the mixed signal directly to the SDI transmitter 934 and the HDMI transmitter 935, and outputs the video signal in the mixed signal to the FPGA chip 933. The video signal format is BT1120 or MIPI, and the FPGA chip 933 is mainly used for processing and bus distribution of the video signal in the mixed signal.
  • The FPGA chip 933 completes digital video signal format conversion, image enhancement, auxiliary data handling, parallel video bus distribution, and other functions, and then sends the video signal to the SDI transmitter 934 and the HDMI transmitter 935. The SDI transmitter 934 and the HDMI transmitter 935 transmit the audio signal and the video signal to the monitor or video switcher 94, which is connected to the wireless video receiving terminal 93, through its SDI video interface 940 and HDMI video interface 941.
  • The wireless video receiving terminal 93 also includes accessories 936, which are connected to the system SOC chip 932 and controlled by it.
  • The accessories include buttons, an LCD screen, a power supply, a housing and installation accessories.
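
To summarize the routing just described, here is a minimal, self-contained C sketch of the receiving terminal 93: the SOC 932 decodes the mixed signal, the decoded audio goes to the SDI/HDMI transmitters 934/935, and the video frame passes through the FPGA 933 first. The types, function names (soc932_decode, fpga933_process, sdi934_hdmi935_send) and frame dimensions are illustrative assumptions rather than vendor driver calls.

```c
/* Minimal sketch of the routing inside the wireless video receiving
 * terminal 93; all names below are hypothetical placeholders. */
#include <stdint.h>
#include <stdio.h>

typedef struct { const uint8_t *data; unsigned len; } mixed_stream_t;
typedef struct { int16_t pcm[2][480]; } audio_frame_t;     /* 2-ch audio   */
typedef struct { unsigned width, height; } video_frame_t;  /* BT1120/MIPI  */

/* Stand-in for the H.264/H.265 + audio decode done by the SOC 932. */
static void soc932_decode(const mixed_stream_t *in,
                          audio_frame_t *a, video_frame_t *v)
{
    (void)in;
    a->pcm[0][0] = 0;
    v->width = 1920; v->height = 1080;
}

/* Format conversion, image enhancement, parallel-bus distribution. */
static void fpga933_process(const video_frame_t *v)
{
    printf("FPGA933: %ux%u frame -> parallel video bus\n", v->width, v->height);
}

/* Audio goes straight to the transmitters; processed video joins it there. */
static void sdi934_hdmi935_send(const audio_frame_t *a, const video_frame_t *v)
{
    (void)a;
    printf("SDI934/HDMI935: %ux%u video + embedded audio to switcher 94\n",
           v->width, v->height);
}

int main(void)
{
    mixed_stream_t rx = { NULL, 0 };   /* mixed signal received over Wi-Fi */
    audio_frame_t audio;
    video_frame_t video;

    soc932_decode(&rx, &audio, &video);
    fpga933_process(&video);
    sdi934_hdmi935_send(&audio, &video);
    return 0;
}
```
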
  • The audio and video transmission system reduces the transmission delay of the audio signal by adopting low-latency compression coding for the audio signal, setting the transmission priority of the audio signal higher than that of the mixed signal, and adopting a short frame structure for the audio signal.
  • With the above techniques, the delay budget of the audio signal is as follows (a worked example of the arithmetic follows this list):
  • T0 is the low-latency audio signal compression coding delay, less than 0.5ms.
  • T1 is the buffering delay before an audio signal data frame is encoded; it depends on the data frame length and is at least 2.5ms.
  • T2 is the transmission and reception delay in the audio signal transmission process, which is less than 1ms.
  • T3 is the length of the air interface transmission time slot, which is less than 0.5ms.
  • T4 is the audio signal decoding delay, less than 0.5ms.
  • The final total transmission delay of the audio signal is T0+T1+T2+T3+T4, which can basically achieve the goal of <5ms.
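
As a quick arithmetic check of the budget above, the quoted figures can simply be summed; this is only an illustration of the numbers listed, with T1 taken at its 2.5 ms minimum and the other terms at their stated upper bounds, so the result is an upper bound and the actual total stays strictly below 5 ms.

```c
/* Sum of the delay terms quoted above. */
#include <stdio.h>

int main(void)
{
    const double t0 = 0.5;   /* compression coding delay (upper bound)  */
    const double t1 = 2.5;   /* pre-encode frame buffering (minimum)    */
    const double t2 = 1.0;   /* transmit/receive delay (upper bound)    */
    const double t3 = 0.5;   /* air-interface time slot (upper bound)   */
    const double t4 = 0.5;   /* decoding delay (upper bound)            */

    printf("T0+T1+T2+T3+T4 <= %.1f ms\n", t0 + t1 + t2 + t3 + t4);
    return 0;
}
```
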
  • In addition, the audio and video transmission system can greatly reduce the power consumption of the wireless microphone 91 by combining several measures: the audio signal uses low-latency compression coding technology;
  • the wireless microphone 91 uses a single-transmit, single-receive mode;
  • the WIFI transmission unit of the wireless microphone works in a transmit-receive-sleep-wake-up cycle;
  • and the audio and video transmission device 92 buffers the control commands addressed to the wireless microphone 91 and then sends them collectively once it is in the awake state.
  • When the WIFI transmission unit of the wireless microphone 91 is in the sleep state, its current consumption can be as low as about 20uA. In the awake state the current gradually increases, reaching up to 200mA during the transmission time slot; the unit then switches to the receiving state and the current drops to 80mA. Its power consumption is therefore significantly reduced (a duty-cycle estimate follows below).
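
The savings from this duty cycling can be estimated with a simple weighted average of the currents quoted above. The 3 ms wake-up period matches the example given in the description; the transmit and receive slot durations below are purely illustrative assumptions, so the result is only an order-of-magnitude estimate.

```c
/* Duty-cycle estimate of the microphone's average current. */
#include <stdio.h>

int main(void)
{
    const double period_ms = 3.0;    /* sleep -> wake repetition period   */
    const double tx_ms     = 0.3;    /* assumed transmit-slot duration    */
    const double rx_ms     = 0.3;    /* assumed receive-slot duration     */
    const double sleep_ms  = period_ms - tx_ms - rx_ms;

    const double i_tx_ma    = 200.0; /* peak transmit current             */
    const double i_rx_ma    = 80.0;  /* receive current                   */
    const double i_sleep_ma = 0.02;  /* 20 uA sleep current               */

    const double avg_ma = (i_tx_ma * tx_ms + i_rx_ma * rx_ms
                           + i_sleep_ma * sleep_ms) / period_ms;
    printf("average current ~ %.1f mA over a %.1f ms cycle\n",
           avg_ma, period_ms);
    return 0;
}
```
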
  • The audio signal of the subject is collected by the wireless microphone 91, which is physically separated from the camera device 95. Compared with a microphone built into the camera, this improves the quality of the audio signal, and the audio and video transmission device 92 realizes the transmission of both the audio signal and the mixed signal at the same time, which makes installation convenient.
  • Both the audio signal and the mixed signal can use wireless transmission technology.
  • The audio and video transmission device 92 uses the same wireless transmission channel to transmit the audio signal and the mixed signal in a time-division multiplexed manner, saving channel resources and avoiding mutual interference between the two transmissions.
  • This transmission scheme both accommodates the characteristics of the audio signal collected by the wireless microphone (small bandwidth occupation, low power consumption and low delay) and satisfies the high transmission bandwidth requirement of the wireless video signal.
  • Because the audio signal has stricter real-time requirements, its transmission priority can be set higher than that of the mixed signal through the WMM QOS mechanism, reducing the delay of the audio signal.
  • The audio signal can adopt a short frame structure, and the WIFI transmission unit of the wireless microphone 91 is configured in single-transmit, single-receive mode.
  • The wireless microphone transmitter can switch continuously among the transmit, receive, sleep and wake-up working modes. The audio signal, which has strict real-time requirements, can be transmitted immediately after wake-up.
  • Control commands carry little data and have loose real-time requirements, so while the wireless microphone 91 is sleeping they can be buffered by the audio and video transmission device 92 and then sent collectively after it wakes up (a sketch of this scheduling and buffering scheme follows below). Through the above means, the power consumption of the wireless microphone 91 can be greatly reduced.
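
To make the scheduling and power-saving measures in this list concrete, the sketch below shows (a) microphone audio being mapped to a higher WMM-style access category than the mixed A/V stream and (b) downlink control commands being buffered until the microphone's wake-up (trigger) frame arrives, then flushed collectively. The access-category values, queue and function names are self-contained illustrations and do not come from any real Wi-Fi driver or WMM API.

```c
/* Illustration of priority tagging and downlink command buffering. */
#include <stdbool.h>
#include <stdio.h>

enum ac { AC_BE = 0, AC_VI = 2, AC_VO = 3 };        /* WMM-like categories */
enum traffic { TRAFFIC_MIC_AUDIO, TRAFFIC_MIXED_AV, TRAFFIC_CONTROL };

static enum ac classify(enum traffic t)
{
    switch (t) {
    case TRAFFIC_MIC_AUDIO: return AC_VO;  /* audio gets the highest priority   */
    case TRAFFIC_MIXED_AV:  return AC_VI;  /* mixed A/V can tolerate more delay */
    default:                return AC_BE;
    }
}

#define CMD_QUEUE_LEN 8
static int cmd_queue[CMD_QUEUE_LEN];
static int cmd_count = 0;

static void send_or_buffer_command(int cmd, bool mic_awake)
{
    if (mic_awake) {
        printf("send command %d immediately\n", cmd);
    } else if (cmd_count < CMD_QUEUE_LEN) {
        cmd_queue[cmd_count++] = cmd;      /* hold it until the mic wakes up */
    }
}

static void on_trigger_frame(void)         /* mic woke up and polled us */
{
    for (int i = 0; i < cmd_count; i++)
        printf("flush buffered command %d\n", cmd_queue[i]);
    cmd_count = 0;
}

int main(void)
{
    printf("mic audio -> AC %d\n", classify(TRAFFIC_MIC_AUDIO));
    printf("mixed A/V -> AC %d\n", classify(TRAFFIC_MIXED_AV));

    send_or_buffer_command(1, false);      /* mic asleep: buffer            */
    send_or_buffer_command(2, false);
    on_trigger_frame();                    /* mic awake: flush collectively */
    return 0;
}
```
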

Abstract

Embodiments of this specification provide an audio and video transmission system and an audio and video transmission device. The system includes the audio and video transmission device and at least one wireless microphone transmitting device. The wireless microphone transmitting device is configured to send an audio signal collected by a wireless microphone to the audio and video transmission device. The audio and video transmission device is configured to be connected to a video capture device and to the wireless microphone transmitting device respectively, to receive the audio signal and transmit it to the video capture device, to obtain from the video capture device a mixed signal generated from the audio signal and a video signal, and to process and output the mixed signal. The audio and video transmission system provided by the embodiments of this specification can collect audio signals, and transmit audio signals and mixed audio and video signals, in shooting scenarios where the audio signal source is far from the video capture device, while guaranteeing the quality of the audio signal.

Description

音视频传输装置及音视频传输系统 技术领域
本说明书涉及通信技术领域,尤其涉及一种音视频传输装置及音视频传输系统。
背景技术
影视拍摄、商业视频拍摄以及个人拍摄等场景,在对被拍摄对象进行视频拍摄时,需要同时采集被拍摄对象的视频信号以及需要与视频信号同步播放的音频信号,然后对视频信号和音频信号进行同步对齐处理,得到最终的音视频混合信号并传输给视频接收端。通常,在采用摄像装置拍摄视频时,可以通过摄像装置采集被拍摄对象的视频信号,并且同时通过摄像装置内置的麦克风采集音频信号,同步处理后生成的音视频混合信号通过无线传输技术发送给视频接收端。但是,内置的麦克风只适用于音频信号源与摄像装置距离较近的场景,如果二者相距较远,就会导致内置麦克风的采集效果差,另外,由于摄像装置上的马达、风扇等产生噪音干扰,无法保证所采集的音频信号的质量。
发明内容
基于此,本说明书实施提供了一种音视频传输系统及音视频传输装置。
根据本说明书实施例的第一方面,提供一种音视频传输系统,所述系统包括音视频传输装置以及至少一个无线麦克风发射装置;
所述无线麦克风发射装置用于将无线麦克风采集的音频信号发送给所述音视频传输装置;
所述音视频传输装置用于分别与外部的视频采集装置和所述无线麦克风发射装置连接,接收所述音频信号,将所述音频信号传输给所述视频采集装置,以及从所述视频采集装置获取所述音频信号和视频信号生成的混合信号,并将所述混合信号处理后输出。
根据本说明书实施例的第二方面,提供一种音视频传输装置,所述音视频传输装置分别与外部的视频采集装置和无线麦克风发射装置连接,所述音视频传输装置包括:
音频传输单元,用于从所述无线麦克风发射装置接收无线麦克风采集的音频信号,并将所述音频信号传输至所述视频采集装置;
视频传输单元,用于从所述视频采集装置获取所述音频信号和视频信号混合后的混合信号,并将所述混合信号输出。
应用本说明书实施例方案,利用物理实体上与视频采集装置分离的无线麦克风采集音频信号源的音频信号,采集的音频信号不是直接由无线麦克风 传输给视频采集装置,而是利用一音视频传输系统传输,音视频传输系统中具有无线麦克风发射装置和音视频传输装置,音频信号由无线麦克风发射装置发给音视频传输装置,再由音视频传输装置传输给视频采集装置。为了可以利用视频采集装置目前已有的功能,在设计本方案时,仍然可由视频采集装置完成音频信号和视频信号的同步处理,生成音视频混合信号,并将混合信号输出至音视频传输装置,以便音视频传输装置将混合信号发送至视频接收端,因此无需对视频采集装置的已有功能进行改变。本说明书实施例的方案可以实现音频信号源与视频采集装置距离较远的拍摄场景的音频信号采集以及音频信号和音视频混合信号的传输。通过将麦克风置于方便采集音频信号源的位置采集音频信号,可以提高采集的音频信号的质量,保证了最终采集的音视频的效果。且通过自行设计的音视频传输装置同时实现音频信号和音视频混合信号的传输,解决了音频信号源与视频采集装置距离较远的拍摄场景中的音频信号和音视频混合信号的传输问题,通过一个装置实现两种信号的传输,也简化了拍摄过程中设备的安装流程。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,并不能限制本说明书。
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本说明书的实施例,并与说明书一起用于解释本说明书的原理。
图1是本说明书一个实施例的一种音视频传输系统的示意图。
图2-6是本说明书实施例的一种音视频传输装置的结构示意图。
图7是本说明书实施例的一种音频传输单元的结构示意图。
图8是本说明书实施例的一种视频传输单元的结构示意图。
图9是本说明书一个实施例的一种音视频传输系统的示意图。
图10是本说明书一个实施例的一种无线麦克风的结构的示意图。
图11是本说明书一个实施例的一种音视频传输装置的结构的示意图。
图12是本说明书一个实施例的一种无线视频接收端的结构的示意图。
具体实施方式
这里将详细地对示例性实施例进行说明,其示例表示在附图中。下面的描述涉及附图时,除非另有表示,不同附图中的相同数字表示相同或相似的要素。以下示例性实施例中所描述的实施方式并不代表与本说明书相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本说明书的一些方面相一致的装置和方法的例子。
在本说明书使用的术语是仅仅出于描述特定实施例的目的,而非旨在限制本说明书。在本说明书和所附权利要求书中所使用的单数形式的“一种”、“所述”和“该”也旨在包括多数形式,除非上下文清楚地表示其他含义。还应当理解,本文中使用的术语“和/或”是指并包含一个或多个相关联的列出项目的任何或所有可能组合。
应当理解,尽管在本说明书可能采用术语第一、第二、第三等来描述各 种信息,但这些信息不应限于这些术语。这些术语仅用来将同一类型的信息彼此区分开。例如,在不脱离本说明书范围的情况下,第一信息也可以被称为第二信息,类似地,第二信息也可以被称为第一信息。取决于语境,如在此所使用的词语“如果”可以被解释成为“在......时”或“当......时”或“响应于确定”。
在影视拍摄、商业视频拍摄或者个人视频拍摄场景,可以通过摄像装置同时从被拍摄对象采集视频信号以及从音频信号源采集音频信号,将音频信号和视频信号同步处理后得到音视频混合信号,并输出给视频接收端。在某些场景,视频接收端与摄像装置距离较远,因而,可以通过无线传输技术将摄像装置采集的音视频混合信号发送给视频接收端,以便在视频接收端可以远程查看。目前,摄像装置在采集被拍摄对象的音频信号时,多采用摄像装置内置的麦克风来采集,这种方式在摄像装置与音频信号源距离较近时比较适用,但是当摄像装置与音频信号源距离较远时,采集的音频信号质量较差,影响播放效果。
基于此,本说明书实施例提供一种音视频传输装置以及音视频传输系统,用于实现音频信号源与视频采集装置距离较远的拍摄场景中,音频信号的采集以及音频信号与音视频混合信号的传输。
如图1所示,所述音视频传输系统10可包括音视频传输装置11以及无线麦克风发射装置12。无线麦克风发射装置12的数量不作限制,图1中示出两个无线麦克风发射装置12作为举例。其中,一个无线麦克风发射装置12 与一个无线麦克风13连接,用于将无线麦克风13采集的音频信号源的音频信号发送给音视频传输装置11。音视频传输装置11还与视频采集装置14连接,视频采集装置14用于采集被拍摄对象的视频信号。音视频传输装置11将从无线麦克风发射装置12接收的音频信号发送给视频采集装置14,以便视频采集装置14将被拍摄对象的音频信号和视频信号进行同步处理,生成音视频混合信号(以下简称为混合信号),并将混合信号输出至音视频传输装置11,以便音视频传输装置11将混合信号输出。
音频信号源可能存在多种示例,例如,作为音频信号源的可能是被拍摄对象、也可能是需要与被拍摄对象的视频同步播放的其他声音来源(比如对视频进行解说的解说员的声音)等等。
在远距离拍摄场景,由于音频信号源与视频采集装置14距离较远,如采用视频采集装置14内置的麦克风,收音效果较差,本申请中采用外置的无线麦克风13。为了采集到更加清晰的音频信号,提升音频信号的质量,无线麦克风13可以置于靠近音频信号源的位置,比如,如果音频信号源是视频的解说员,可以将无线麦克风13携带在解说员的身上。为了便于携带,无线麦克风发射装置12与无线麦克风13可以采用一体化设计,比如,可以设计成手持式或者小型一体化领夹式。无线麦克风13采集到音频信号源的音频信号后,通过无线麦克风发射装置12发送给音视频传输装置11。无线麦克风发射装置12可以通过各种无线传输技术将采集的音频信号发送给音视频传输装置11,比如、蓝牙、WIFI、Zigbee等无线传输技术。考虑到传输距离和传输效 果,在音频信号源与视频采集装置14距离较远的拍摄场景,通常选用WIFI传输技术。比如,可以将采集的音频信号通过数字调制的方式调制后,通过1.9GHz DECT(Digital Enhanced Cordless Telecommunications Digital Enhanced Cordless Telecommunications:数字增强无绳通信),2.4GHz ISM或5GHz ISM频段对调制后的音频信号进行传输。
通常,为了满足现场立体声或多通道收音需求,一个音视频传输装置11可以同时连接至少两个无线麦克风发射装置12,每个无线麦克风发射装置12将与其连接的无线麦克风13采集的音频信号分别发送给音视频传输装置11。一个音视频传输装置11连接多个无线麦克风发射装置12时,可以采用TDMA(Time Division Multiple Access:时分多址)的传输方式,不同的无线麦克风发射装置12占用不同时隙,多个无线麦克风发射装置12与音视频传输装置11进行数据传输时只占用一个独立无线频道。
音视频传输装置11与视频采集装置14之间可以采用物理连接的方式,也可以采用无线连接的方式。通常,音视频传输装置11可以放置在靠近视频采集装置14的位置,比如,视频采集装置14上设计有与音视频传输装置11连接的物理接口,两者可以通过物理接口进行连接,当然,为了节约空间,方便拍摄过程中视频采集装置14与音视频传输装置11的移动,音视频传输装置11可以固定安装在视频采集装置14上。
音视频传输装置11接收到无线麦克风发射装置12发送的音频信号后,可以将音频信号输入到视频采集装置14,视频采集装置14将接收到的音频信 号与采集的视频信号进行同步对齐处理,生成使用该音频信号对该视频信号伴音后的混合信号,然后再将混合信号输出给音视频传输装置11,音视频传输装置11对混合信号进行处理后输出。比如,在某些实施例中,音视频传输装置11可以具有显示屏等部件,将混合信号解码后输出至显示界面以供用户观看。
在某些实施例中,音视频传输装置11也可以将混合信号发送给其他的视频接收端,以便用户通过视频接收端观看视频,比如,在很多视频拍摄场景,导演需远程监控拍摄效果,这时可以将混合信号发送给远程监控设备,以便导演监控拍摄效果。为了实现远距离传输,且无需布线,音视频传输装置11也可以通过无线传输技术将混合信号输出给视频接收终端,比如、蓝牙、WIFI、Zigbee等无线传输技术。考虑到音频信号源与视频采集装置14距离较远的拍摄场景,传输距离可能较远,且视频数据量较大,可以选用WIFI传输。比如,可以采用通用视频压缩编码技术(例如H.264/H.265)对混合信号进行编码后,采用通用WIFI传输技术,例如通过2.4GHz、5GHz UNII WIFI及60GHz 802.11d WIFI将混合信号传输给视频接收端。视频接收端可以是具有通用WIFI连接功能和通用H.264/H.265视频解码能力的智能终端,比如、手机、电脑等个人终端,也可以是具有更强硬件H.264/H.265解码能力及更强无线WIFI性能的独立设备。
本说明书实施例中的音视频传输系统,通过无线麦克风13采集音频信号源的音频信号,然后通过无线麦克风发射装置12发送给音视频传输装置11, 音视频传输装置11将音频信号输入到视频采集装置14,视频采集装置14将音频信号和采集的视频信号同步后生成混合信号,然后将混合信号输出至音视频传输装置11,音视频传输装置11再将混合信号输出。通过外置的无线麦克风13采集音频信号源的音频信号,可以提升音频信号的质量,且可以通过自行设计的音视频传输装置11来传输音视频信号,将音频信号输入至视频采集装置14,实现音频信号与视频信号的同步,生成音视频混合信号。并且音视频传输装置11还可以同时实现混合信号的传输,将音视频混合信号发送至视频接收端。通过一个音视频传输装置11,同时实现音频信号和音视频混合信号的传输,既方便拍摄过程中的设备安装,又可以解决远距离拍摄场景的音频采集与传输问题,提升了最终采集的音视频的效果。
如图2所示,本说明书实施例提供的音视频传输装置11可以包括视频传输单元110和音频传输单元111,音频传输单元111通过无线传输通道接收无线麦克风发射装置12传输的音频信号,并传输至视频采集装置14,视频传输单元110将视频采集装置14输出的音频信号和视频信号同步后的混合信号通过无线传输通道发送至视频接收端15。
在某些实施例中,音视频传输装置11在传输音频信号和混合信号时可以独立传输,比如,如图3所示,音视频传输装置11可以包含视频传输单元110、音频传输单元111、配件112以及多根天线(图中的113和114),配件112可能包括电池、按键、显示屏、外壳等。视频传输单元110和音频传输单元111各自采用独立的无线传输通道和天线,通过各自的无线传输通道进行 数据的传输。比如,音频传输单元111可以通过天线114接收无线麦克风发射装置12发送的音频信号,并输出至视频采集装置14,视频传输单元110从视频采集装置14获取混合信号,通过天线113将混合信号发送至视频接收端15,为了避免两种信号传输造成的相互干扰,视频传输单元110和音频传输单元111可以工作于不同的频段,比如视频传输单元110工作于UNII 5GHz WIFI,而音频传输单元111工作于UHF(Ultra High Frequency:特高频)或2.4GHz。当然,在某些实施例中,如图4所示,为了减少天线数量,音视频传输装置11还可以包括双频双工滤波器116和双频天线115,通过双频双共滤波器116实现射频合路,然后通过双频天线115传输,以便减少天线数量。
当然,由于音频信号和混合信号各自采用独立的传输通道,既占用无线传输通道资源,比如,至少要占用两个无线通道,并且两种信号的传输也会造成相互干扰。为了减少对无线传输通道资源的占用,并且减小两种信号传输之间的干扰。在某些实施例中,音视频传输装置11可以按照时分复用的方式采用同一个无线通道进行传输音频信号和混合信号,即音频信号和混合信号可以采用统一的无线传输技术,比如,可以统一采用WIFI IEEE 802.11传输技术。由于无线视频信号传输特点是码流带宽高,但是可接受较长时间的延时,而无线麦克风音频信号的特点是码流带宽低,但是对延时的要求较为严格,因而将两种信号通过同一个无线通道传输,既可兼顾无线麦克风音频信号带宽占用小,低功耗,延时参数低等特点,又可保证无线视频信号高传输带宽的要求,各取所需。另外,音频信号和混合信号可以通过2.4GHz, 5GHz UNII或60GHz ISM频段进行传输,可以保证无线视频传输高速率带宽的要求,而较少干扰的5GHz UNII和60GHz ISM频段也可为音频信号的实时传输提供更可靠的保障。
在某些实施例中,音频信号和混合信号均采用WIFI技术进行传输,两种信号可以共用WIFI传输通道。如图5所示,音视频传输装置还可以包括WIFI传输单元117,视频传输单元110和音频传输单元111可以共用WIFI传输单元117,音频传输单元111通过WIFI传输单元117从无线麦克风发射装置12接收音频信号,视频传输单元110通过WIFI传输单元117将混合信号发送至视频接收端15。
在某些实施例中,如图6所示,音视频传输装置11还包括中央处理单元118,中央处理单元118用于从视频传输单元110接收混合信号,分别对所述混合信号中音频信号和视频信号进行压缩编码后输出给WIFI传输单元117,以便WIFI传输单元117传输至视频接收端15,以及从WIFI传输单元117接收无线麦克风发射装置12发送的音频信号,转换成数字压缩音频信号后发送给音频传输单元111。当音视频接收装置11中的WIFI传输单元117从无线麦克风发射装置12接收音频信号后,可以先将音频信号发送给中央处理单元118,中央处理单元118将音频信号转换成数字压缩音频信号,转化得到的数字压缩音频信号的格式同无线麦克风发送装置12发送的音频信号的格式相同,可以是经过ADPCM,Opus-CELT等开源编码算法编码后的一种。中央处理单元118再将该数字压缩音频信号发送给音频传输单元110,以便音频传 输单元110进一步处理后得到模拟音频信号再输出给视频采集装置14。同时,视频采集装置14将音频传输单元111输入的音频信号与采集的视频信号同步处理,生成音视频混合信号,并将混合信号输出给视频传输单元110,视频传输单元110将混合信号发送给中央处理的单元118,以便中央处理单元118分别对混合信号中音频信号和混合信号中的视频信号进行压缩编码后得到编码后的混合信号,再将编码后的混合信号输出给WIFI传输单元117,通过WIFI传输单元117发送至视频接收端15。在某些实施中,中央处理单元118可以是集成CPU和H.264/H.265编解码功能的SOC芯片。
在某些实施例中,如图7所示,音频传输单元111可以包括音频处理子单元111a、采样子单元111b以及第一接口子单元111c,其中,音频处理子单元111a用于获取中央处理单元118转化后的数字压缩音频信号,并对该数字压缩音频信号进行解码处理,同时,还可完成对该数字压缩音频信号的均衡、音效、降噪、音量电平调节的处理。采样子单元111b用于对解码处理后的数字压缩音频信号进行采样恢复得到模拟音频信号,在某些实施例中,采样子单元111b可以是多通道音频采样DAC(Digital to Analog Converter:数模转换器)。第一接口子单元111c用于将采样恢复得到的模拟音频信号输出给所述视频采集装置14,以便视频采集装置14将采样恢复得到的音频信号与采集的视频信号同步处理。其中,第一接口子单元111c包括多通道放大处理器,用于对采样恢复得到的模拟音频经过多通道音频放大处理,以输出幅度满足视频采集装置14外置麦克风输入接口电平要求的模拟音频信号。
在某些实施例中,如图8所示,视频传输单元110可以包括第二接口子单元110a、视频信号处理子单元110b,第二接口子单元110a用于从视频采集装置14接收所述混合信号,并分离出混合信号中的音频信号和视频信号,将混合信号中的音频信号传输给中央处理单元118以进行压缩编码处理,视频信号处理子单元110b用于对视频信号进行处理后发送给中央处理单元118,以进行压缩编码处理。在某些实施例中,第二接口子单元110a包括SDI视频接收器,HDMI视频接收器,视频采集装置14输出SDI和HDMI标准的接口信号,该接口信号为包含音频信号和视频信号的混合信号,其中,混合信号中的视频信号通过SDI视频接收器和HDMI视频接收器转换成标准数字并行视频信号,信号格式为BT1120或MIPI,然后送入视频处理子单元110b,视频处理子单元110b可以是FPGA芯片,FPGA芯片选择一路视频信号发送给送给中央处理单元118,以便中央处理单元118对视频信号进行压缩编码处理。混合信号中的音频信号则直接输出给中央处理单元110,与视频信号分开做压缩编码处理。
在某些实施例中,所述音视频传输装置11还包括CCU(Camera Control Unit,摄像机控制单元)控制接口,中央处理单元118通过所述CCU控制接口对视频采集装置14的CCU进行控制,以便完成视频采集装置14的镜头、图像参数、辅助数据的调节。
在某些实施例中,音视频传输装置11还包括与中央处理单元118连接的扩展接口,用于连接以下一种或多种配件:LCD屏幕、按键、电源。比如, 音视频传输装置11可以包括屏幕、按键、电源等配件,这些配件通过扩展接口与中央处理单元118连接,以便中央处理单元118实现对这这些配件的控制。
在某些应用场景,音视频传输装置11在接收到视频采集装置14输出的混合信号后,可以发送给其他的视频接收终端15。比如,在影视拍摄领域或商业拍摄领域,视频采集装置14采集的音视频混合信号需发送给视频接收终端15,以便导演,灯光师等工作人员可以根据视频的效果做出相应的调整。为了方便这些工作人查看拍摄好的视频,在某些实施例中,音视频传输系统10还可以包括安装于电子设备上的APP,作为视频接收端15,音视频传输装置11可以将音视频信号同步后的混合信号发送至APP,以便用户通过APP查看拍摄好的音视频。电子设备可以是用户的手机、平板、电脑等终端设备,用户预先在电子设备安装指定的APP,通过APP从音视频传输装置11接收音视频信号同步后的混合信号。
在某些实施例中,音视频传输装置11可以与无线麦克风发射装置12以及安装有APP的电子设备通过WIFI通道连接,并且音视频传输装置11的WIFI传输单元的工作模式为AP模式,无线麦克风发射装置12的WIFI传输单元和安装有APP的电子设备的工作模式为Station模式。比如,音视频传输装置11可以开启WIFI热点,无线麦克风发射装置12和安装有APP的电子设备可以搜索音视频传输装置11的SSID,通过预设的密码与音视频传输装置11建立WIFI连接。然后无线麦克风发射装置12可以通过建立的WIFI连接 通道向音视频传输装置11传输音频信号,而安装有APP的电子设备可以通过建立的WIFI连接通道从音视频传输装置11接收混合信号。
应用于视频拍摄领域的无线麦克风音频信号一般要求非常低的传输延时,通常小于10ms,同时具有50至数百米的传输距离。对无线麦克风音频信号的传输对延时要求较为严格是因为:如果延时严重,则会出现声音和图像画面不同步的问题,严重影响最终的视频效果。因此,减小无线麦克风音频信号传输的延时十分关键。无线麦克风13传输的音频信号发生延时的原因主要包括以下几个方面:第一、无线麦克风13采集到音频信号源的音频信号后,无线麦克风发射装置12通常需要将音频信号进行压缩编码处理,再通过无线通道发送给音视频传输装置11,通常音频信号可以逐帧数据依次进行编码,所以,在采集音频信号时,会先将采集的音频数据缓存,等待缓存的音频数据长度达到一帧后再进行编码,无线麦克风发射装置12在等待采集的音频数据达到一帧长度的过程也会产生一定的延时,这个延时与音频数据的帧长有关。第二、对音频信号进行压缩的过程需要消耗一定的时间,会造成一部分延时。与之相对应的,在音视频传输装置11接收到音频信号后,需要将其解码,则解码过程也会造成一定的延时。第三、编码完成后,音频信号按时隙发送时,时隙之间会产生一定的空口时隙延时。第四、音频信号从无线麦克风发射装置12发送到音视频传输装置11,会产生一定的收发延时。
在某些实施例中,为了减少因音频信号编解码带来的延时,在对音频信号进行压缩编码时,可以采用低延时低损压缩技术对音频信号进行压缩编 码,所述低延时损压缩技术可以是OPUS-CELT、ADPCM(Adaptive Differential Pulse Code Modulation:自适应差分脉冲编码调制)、LC3、LC3plus中的一种。
由于OPUS-CELT、ADPCM、LC3、LC3plus等低延时低损压缩技术在对音频信号进行压缩时,压缩的每帧音频信号的帧长可以自行设置,且对于短帧结构的音频信号,压缩后的音频信号的质量也可以保证。因而,在某些实施例中,为了减少上述第一方面原因造成的延时,可以将无线麦克风13采集的音频信号帧长配置成小于指定长度,音频信号的帧长尽可能采用短帧结构,比如,小于2.5ms,以便减小音频信号因帧内缓存导致的等待延时。当然,指定长度可以根据实际情况去设置,比如可以设置成上述低延时低损压缩技术支持的最小帧长。
在某些实施例中,由于音频信号传输和混合信号传输共用一个无线传输通道,而音频信号对延时的要求高于音视频混合信号,为了减小音频信号的传输延时,音频信号被传输的优先级高于混合信号被传输的优先级,保证无线麦克风音频信号优先传送,从而降低其收发延时,提高传输的可靠性。当然,无线麦克风发射装置12与音视频传输装置11之间除了传输音频信号,还会传输一些控制指令,在音视频传输装置11传输的各种数据中,可以将音频信号的传输优先级设置成最高,保证音频信号最先传送。比如,无线麦克风发射装置12与音视频传输装置11均采用WIFI传输技术,无线麦克风发射装置12与音视频传输装置11的WIFI传输单元可以支持WMM QOS机制, 通过WMM QOS机制来设置各传输信号的优先级。
由于音频信号和混合信号均采用WIFI传输技术,相比于蓝牙技术和Zigbee技术,WIFI技术的功耗更大。而在视频拍摄领域,通常需要将无线麦克风13与无线麦克风发射装置12设计成一体化形式,这样无线麦克风设备的整体尺寸更小,重量更轻,便于被拍摄对象佩戴安装,这就要求供电电池体积要更小,从而电池容量也更小。所以,如何降低无线麦克风发射装置12的功耗也非常关键。
在某些实施例中,无线麦克风发射装置12利用WIFI技术向音视频传输装置11传输音频信号,无线麦克风发射装置12和音视频传输装置11均包含WIFI传输单元,为了减小无线麦克风发射装置12的功耗,无线麦克风发射装置12的WIFI传输单元可以交替的工作于休眠状态和唤醒状态,休眠状态功耗更低,无线麦克风发射装置12的WIFI传输单元不必一直处于唤醒状态,可以减少电量消耗。比如,在某些实施例中,无线麦克风发射装置12的WIFI传输单元以预设时间间隔从休眠状态切换到唤醒状态,并在本次唤醒状态下执行完传输任务后,再从唤醒状态切换到休眠状态,传输任务可以是指将无线麦克风13采集的音频信号发送给音视频传输装置11、从音视频传输装置11接收控制指令等任务。比如,无线麦克风发射装置12的WIFI传输单元可以每隔3ms从休眠状态唤醒,唤醒后,首先将音频信号发送给音视频传输装置11,然后接收音视频传输装置11发送的控制指令,完成后又进入休眠模式。
使WIFI传输单元省电的关键是尽量降低发送时隙持续时间,所以可以从以下几方面来降低发送时隙持续时间:第一、降低发送数据速率,比如可以采用低延时压缩编码技术对音频信号进行编码,可降低4倍以上发送数据速率。第二、提高空口传输速率,比如工作于更高MCS(Modulation and Coding Scheme,调制与编码策略)速率配置。第三、降低接收时隙持续时间,比如,由于音视频传输装置11发送给无线麦克风发射装置12的一些控制指令通常对实时性的要求都不高,因而在无线麦克风发射装置12处于休眠状态时,音视频传输装置11可以将这些控制指令先缓存,等待无线麦克风发射装置12被唤醒后,再发送给无线麦克风发射装置12,这样也可降低无线麦克风发射装置12的接收时隙时间,达到节电目的。举个例子,音视频传输装置11的WIFI传输单元可以采用WMM QOS省电机制中的下行数据包缓存技术,音视频传输装置11对于发送给无线麦克风13的控制指令(即下行数据包)可以采用触发方式发送,当音视频传输装置11没有接收到无线麦克风发射装置12发送的触发帧时,则将控制指令缓存,当收到触发帧时,说明此时无线麦克风发射装置12处于唤醒状态,这时便可以将控制指令发送给无线麦克风发射装置12。
总体来说,为了尽可能地降低功耗,无线麦克风发射装置12可以在发射-接收-休眠-唤醒等工作状态之间不断切换。可见,无线麦克风发射装置12具有被周期性唤醒的特性,由于音频信号实时性要求更高,无线麦克风发射装置12唤醒后可以立即将音频信号发送给音视频传输装置11,而对于音视频传 输装置11发送的控制指令,由于数据量小且实时性要求不高,因而在无线麦克风发射装置12休眠时,可由音视频传输装置11将控制信号缓存,待无线麦克风发射装置12被唤醒后再集中发送。无线麦克风发射装置12在唤醒后,会先发送音频信号,再接收音视频传输装置11发送的控制指令。通过这种方式,可以大大降低无线麦克风发射装置12的功耗。
在某些实施例中,为了进一步降低无线麦克风发射装置12的功耗,无线麦克风发射装置12的WIFI传输单元被配置为单发单收模式(SISO:single input single out模式),可以采用单根天线对数据进行收发,以减少天线发射接收通道数量,达到节省功耗目的。
为了进一步解释本说明书实施例提供的音视频传输装置和音视频传输系统,以下以一个具体的实施例加以说明。
在影视拍摄和商业视频拍摄场景,通常是摄影师通过摄像装置采集被拍摄对象的视频,然后通过无线传输技术发送至导演等工作人员,以便对拍摄效果进行远程监控。在拍摄过程可以同时采集被拍摄对象的视频信号和音频信号,通常摄像装置有内置的麦克风,接收被拍摄对象的音频信号,但对于被拍摄对象与摄像装置距离较远时,采用内置麦克风的收音效果较差,严重影响最终采集的音视频的效果。
本说明书实施例提供一种音视频传输系统,可以很好地解决被拍摄对象与摄像装置距离较远的拍摄场景中的音频信号的采集与传输问题。如图9所示,该音视频传输系统包括无线麦克风91,其中,无线麦克风91的数量不限 制,图中以两个为例。音视频频传输装置92以及无线视频接收端93,无线视频接收端93的数量也不限制,无线视频接收端可以是具有通用WIFI连接功能和通用H.264/H.265视频解码能力的智能终端,也可以是具有更强硬件H.264/H.265解码能力及更强无线WIFI性能的独立设备,该独立设备可以连接切换台或者监控台94。无线麦克风91可以采用麦克风本体与上述实施例中的无线麦克风发射装置一体化设计,比如,可以设计成微型的领夹模式,由被拍摄对象携带,这样麦克风本体便可以减小外界噪声的干扰,清晰地采集被拍摄对象的音频信号,提升采集的音频信号的效果,通过无线麦克风发射装置发送给音视频传输装置92。
音视频传输装置92可以安装于摄像装置95上,并通过物理接口与摄像装置95连接。该音视频传输装置92用于接收无线麦克风91发送的音频信号,并通过物理接口输入至摄像装置95,摄像装置95将音频信号和采集的视频信号进行同步对齐处理,生成音视频信号混合后的混合信号,并将混合信号输出给音视频传输装置92,音视频传输装置92将混合信号发送给无线视频接收端93,以便导演等工作人员查看拍摄的音视频。其中,无线视频接收端93可以是安装有APP的设备,比如、手机、平板、笔记本电脑或者是独立的硬件设备等,该APP可以从音视频传输装置92接收摄像装置95的音视频信号。
其中,音频信号和混合信号可以采用WIFI传输技术,比如,可以统一采用WIFI IEEE 802.11传输技术,传输音频信号和混合信号的WIFI传输单元 可以工作于2.4GHz,5GHz UNII或60GHz ISM频段。传输过程中的组网方式为音视频传输装置92工作于AP模式,无线麦克风91和无线视频接收端93工作于Station模式。
为了节省无线频道资源,并且减少音频信号和混合信号传输过程中的相互干扰,音频信号和混合信号可以按照时分复用的方式采用同一个无线通道进行传输。
以下分别对无线麦克风91、音视频传输装置92、无线视频接收端93的结构以及处理流程进行详细的介绍。
1、无线麦克风
以下结合无线麦克风91的结构示意图图10来介绍其处理流程。
无线麦克风91包括麦克风音频驱动放大电路912,音频驱动放大电路912与麦克风本体(910或911)连接,主要用于模拟音频信号产生及音频信号的调节。无线麦克风91支持内置麦克风本体910和外置麦克风本体911的输入,当同时接入内置麦克风910和外置麦克风本体911时,可以通过外置麦克风本体911接入阻抗变化检测或通过多通道音频采样ADC(Analog to Digital Converter:模数转换器)913的数字信号检测来优先选择外置麦克风本体911作为主信号输入,没有外置麦克风本体911接入时,系统默认以内置麦克风本体910作为音频输入信号输入。
音频驱动放大电路912生成音频模拟信号后,输入至多通道音频采样ADC913,多通道音频采样ADC913实现模拟音频信号转换为数字原始音频信 号,具有预放大增益调节和数字电平增益调节功能,CPU915可通过控制调整预放大增益或数字电平增益来实现麦克风本体输入音量的调节。
多通道音频采样ADC 913将数字原始音频信号发送至音频信号处理及压缩编码DSP单元914,音频信号处理及压缩编码DSP单元914实现音频信号的低延时低损压缩编码和实现均衡,降噪等音频信号增强处理功能。为了减小音频信号的传输延时,音频信号采用低延时低损压缩技术,比如OPUS-CELT、ADPCM、LC3、LC3plus压缩编码技术中的一种。
音频信号处理及压缩编码DSP单元914将编码压缩后的数字压缩音频信号发送给CPU 915,CPU915运行TCP/IP协议栈完成从数字压缩音频信号到IP数据流转换,无线传输参数配置管理,音频参数配置。同时,CPU915还可以实现对按键/LCD屏幕、电源等配件917的控制管理功能,主要控制工作通过内置固件完成。
CPU 915将音频信号传输给WIFI传输单元916,WIFI传输单元916通过天线918将音频信号传输给视音频传输装置92,WIFI传输单元916工作于Station模式,和工作于AP模式的音视频传输装置92保持无线通信连接,主要完成WIFI无线传输协议栈和无线传输链路功能,可通过发送-接收-休眠-唤醒方式交替工作,可支持WIFI WMM QOS机制以及及WMM省电功能。
此外,无线麦克风91还包括其他配件917,比如,按键,LCD屏幕,电源、外壳及安装配件等。
2、音视频传输装置
以下结合音视频传输装置92的结构示意图图11来介绍其工作流程。
音视频传输装置92包括接口电路部分,接口电路部分主要包括SDI视频接收器920,HDMI视频接收器921,CCU控制信号接口电路922及多通道音频放大处理器923等构成。
摄像装置95的接口部分包括SDI视频接口950、HDMI视频接口951、CCU控制接口952以及外置麦克风音频接口953,摄像装置95通过SDI视频接口950、HDMI视频接口951输出SDI和HDMI标准视频接口信号,并通过SDI视频接收器920和HDMI视频接收器921转换成标准数字并行视频信号,信号格式为BT1120或MIPI,送入后端FPGA芯片924,FPGA芯片924主要进行视频信号的处理及总线选择,FPGA芯片924选择一路需要的视频发送给系统SOC芯片925,系统SOC芯片925可以是集成CPU和H.264/H.265编解码功能的芯片,SOC芯片925对视频进行压缩编码处理,数字串行视频接口信号中还包含双通道数字伴音音频信号,可直接送入系统SOC芯片925,和视频信号分开做音频压缩处理。
SOC芯片925与WIFI传输单元926连接,将压缩处理后混合信号发送给WIFI传输单元926,以便WIFI传输单元926通过天线9210发送给无线视频接收端93。同时,WIFI传输单元926接收无线麦克风91发送的音频信号,并传输给SOC芯片925。音视频传输装置92的WIFI传输单元926工作于AP模式,可设置一个独立SSID,可连接多个无线麦克风91和无线视频接收端93,构成一个BSS(Basic Service Set:基本服务集)网络系统单元,WIFI传 输单元926支持WIFI WMM协议,支持WMM QOS机制和WMM省电功能。
SOC芯片925接收到WIFI传输单元926发送的无线麦克风采集的音频信号后,可以转换成数字压缩音频信号,压缩音频信号的格式同无线麦克风发送的音频信号格式相同,为经过ADPCM,Opus-CELT、LC3、LC3plus低延时低损音频压缩编码算法编码后的一种。SOC芯片925将数字压缩音频信号发送给音频信号处理及压缩解码DSP单元927,音频信号处理及压缩解码DSP单元927完成数字压缩音频的解码输出,同时还可完成均衡,音效,降噪,音量电平调节的处理功能,解码后恢复的音频信号可以是48KHz 16-24bit高清音频信号,然后通过多通道音频采样DAC928完成数字音频到模拟音频的转换,转换后模拟音频经过多通道音频放大处理器923处理,输出幅度满足摄像装置95的外置麦克风音频接口953电平要求的模拟音频信号。
此外,SOC芯片925还通过CCU控制接口电路923与摄像装置CCU控制接口953连接,可支持对摄像装置951的CCU进行控制。
SOC芯片925还通过扩展接口与配件929连接,实现对配件的控制。配件929包括按键,LCD屏幕,电源、外壳及安装配件等。
3、无线视频接收端
无线视频接收端93可以是具有通用WIFI连接功能和通用H.264/H.265视频解码能力的智能终端,也可以是具有更强硬件H.264/H.265解码能力及更强无线WIFI性能的独立设备。如无线视频接收端为独立设备,其结构示意图如图12所示,以下结合其结构示意图来介绍其工作流程。
无线视频接收端93包括WIFI传输单元931,WIFI传输单元931通过天线930接收音视频传输装置92发送的混合信号,WIFI传输单元931工作于Station模式,和工作与AP模式的音视频传输装置92保持无线通信连接,接收音视频传输装置92发送的音视频混合后的混合信号。
WIFI传输单元931接收混合信号后,发送给系统SOC芯片932,系统SOC芯片932可以是集成CPU和H.264/H.265编解码功能的芯片,SOC芯片932完成混合信号中的视频信号的压缩解码,同时完成音频信号的解码,然后SOC芯片932将直接将混合信号中的音频信号送给SDI发送器934和HDMI发送器935,并且将混合信号中的视频信号输出给FPGA芯片933,视频信号的格式为BT1120或MIPI,FPGA芯片933主要用于对混合信号中的视频信号进行处理及总线分发。FPGA芯片933完成数字视频信号格式转换,图像增强,辅助数据和并行视频总线分发等处理等功能,然后将视频信号发送给SDI发送器934和HDMI发送器935,SDI发送器934和HDMI发送器935将音频信号和视频信号通过与无线视频接收端93连接的监视器或视频切换台94的SDI视频接口940和HDMI视频接口941传输至监视器或视频切换台94。
无线视频接收端93还包括配件936,配件936与系统SOC芯片932连接,并由SOC芯片932控制,配件包括按键、LCD屏幕、电源、外壳及安装配件等。
本说明书实施例提供的音视频传输系统通过对音频信号采用低延时编码压缩、设置音频信号传输的优先级高于混合信号的优先级、音频信号采用短 帧结构等方式来减少音频信号的传输延时,通过上述技术,音频信号的延时情况如下:
T0为低延时音频信号压缩编码延时,小于0.5ms。
T1为音频信号数据帧编码前的缓存延时,与数据帧长有关,最小为2.5ms。
T2为音频信号传输过程中的收发延时,小于1ms。
T3为空口发送时隙长度,小于0.5ms。
T4为音频信号解码延时,小于0.5ms。
最终音频信号的传输总延时为T0+T1+T2+T3+T4,基本可实现<5ms目标。
此外,本说明书实施例提供的音视频传输系统通过采取音频信号采用低延时压缩编码技术、无线麦克风91采用单发单收模式、无线麦克风的WIFI传输单元采用发射-接收-休眠-唤醒的工作模式、音视频传输装置92将发送给无线麦克风91的控制指令缓存,待其处于唤醒状态再集中发送等方式,可以大大降低无线麦克风91的功耗。无线麦克风91的WIFI传输单元工作于休眠状态时其耗电电流可低到20uA左右,唤醒状态时电流逐渐增大,最大的发送时隙电流可达200mA,后转入接收状态,电流下降到80mA,其功耗明显降低。
通过以上描述可以看出,本实施例提供的音视频传输系统,可以具有以下优势:
(1)通过物理上与摄像装置94分离的无线麦克风91采集被拍摄对象的音频信号,相比于内置于摄像装置的麦克风,可以提升音频信号的质量,且通过音视频传输装置92同时实现音频信号和混合信号的传输,方便安装。
(2)音频信号和混合信号可以均采用无线传输技术,音视频传输装置92按照时分复用的方式采用同一个无线传输通道传输音频信号和混合信号,节省信道资源,避免两种信号传输产生干扰。采用这种传输方案,既可兼顾无线麦克风采集的音频信号带宽占用小,功耗低,延时参数低的特点,又可保证无线视频信号的高传输带宽要求,音视频传输业务各取所需。
(3)由于音频信号对实时性要求更高,因而可以通过WMM QOS机制设置音频信号的传输优先级高于混合信号的优先级,降低音频信号的延时。
(4)通过对音频信号进行低延时低损压缩,降低音频信号的传输延时。
(5)音频信号可以采用短帧结构,无线麦克风91的WIFI传输单元被配置为单发单收模式,无线麦克风发射装置可以在发射-接收-休眠-唤醒的多种工作模式之间不断切换,对于实时性要求高的音频信号可唤醒后立即发射,对于控制指令,数据量小且实时性要求不高,无线麦克风91休眠时,可由音视频传输装置92缓存,待其唤醒后再集中发出。通过上述手段,可以大大降低无线麦克风91的功耗。
以上实施例中的各种技术特征可以任意进行组合,只要特征之间的组合不存在冲突或矛盾,但是限于篇幅,未进行一一描述,因此上述实施方式中的各种技术特征的任意进行组合也属于本说明书公开的范围。
本领域技术人员在考虑说明书及实践这里公开的说明书后,将容易想到本说明书实施例的其它实施方案。本说明书实施例旨在涵盖本说明书实施例的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本说明书实施例的一般性原理并包括本说明书实施例未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本说明书实施例的真正范围和精神由下面的权利要求指出。
应当理解的是,本说明书实施例并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本说明书实施例的范围仅由所附的权利要求来限制。
以上所述仅为本说明书实施例的较佳实施例而已,并不用以限制本说明书实施例,凡在本说明书实施例的精神和原则之内,所做的任何修改、等同替换、改进等,均应包含在本说明书实施例保护的范围之内。

Claims (22)

  1. 一种音视频传输系统,其特征在于,所述系统包括音视频传输装置以及至少一个无线麦克风发射装置;
    所述无线麦克风发射装置用于将无线麦克风采集的音频信号发送给所述音视频传输装置;
    所述音视频传输装置用于分别与外部的视频采集装置和所述无线麦克风发射装置连接,接收所述音频信号,将所述音频信号传输给所述视频采集装置,以及从所述视频采集装置获取所述音频信号和视频信号生成的混合信号,并将所述混合信号处理后输出。
  2. 根据权利要求1所述的音视频传输系统,其特征在于,所述音视频传输装置按照时分复用的方式通过同一个无线通道传输所述音频信号和所述混合信号。
  3. 根据权利要求2所述的音视频传输系统,其特征在于,所述音频信号被传输的优先级高于所述混合信号被传输的优先级。
  4. 根据权利要求1所述的音视频传输系统,其特征在于,所述无线麦克风发射装置在将所述音频信号发送给所述音视频传输装置之前,采用低延时低损压缩技术对所述音频信号进行压缩,所述低延时低损压缩技术包括OPUS-CELT、ADPCM、LC3、LC3plus中的一种。
  5. 根据权利要求4所述的音视频传输装置,其特征在于,所述音频信号 的帧长被配置为小于指定长度。
  6. 根据权利要求1所述的音视频传输系统,其特征在于,所述系统还包括安装于一外部电子设备上的APP,所述音视频传输装置处理后的混合信号发送至所述APP,以使所述APP将所述处理后的混合信号解码后显示。
  7. 根据权利要求6所述的音视频传输系统,其特征在于,所述无线麦克风发射装置和所述音视频传输装置包括WIFI传输单元,所述音视频传输装置与所述无线麦克风发射装置和所述外部电子设备通过WIFI通道连接,所述音视频传输装置的工作模式为AP模式,所述无线麦克风发射装置和所述电子设备的工作模式为Station模式。
  8. 根据权利要求7所述的音视频传输系统,其特征在于,所述无线麦克风发射装置的WIFI传输单元的工作状态在休眠状态和唤醒状态之间交替切换。
  9. 根据权利要求8所述的音视频传输系统,其特征在于,所述无线麦克风发射装置的WIFI传输单元以预设时间间隔从休眠状态切换到唤醒状态,并在本次唤醒状态下执行完传输任务后,从所述唤醒状态切换到所述休眠状态;所述传输任务包括将所述音频信号发送给所述音视频传输装置,以及从所述音视频传输装置接收控制指令。
  10. 根据权利要求9所述的音视频传输系统,其特征在于,所述音视频传输装置在所述无线麦克风发射装置的WIFI传输单元工作状态为休 眠状态时,缓存所述控制指令。
  11. 根据权利要求7所述的音视频传输系统,其特征在于,所述无线麦克风发射装置的WIFI传输单元被配置为单发单收模式。
  12. 一种音视频传输装置,其特征在于,所述音视频传输装置分别与外部的视频采集装置和无线麦克风发射装置连接,所述音视频传输装置包括:
    音频传输单元,用于从所述无线麦克风发射装置接收无线麦克风采集的音频信号,并将所述音频信号传输至所述视频采集装置;
    视频传输单元,用于从所述视频采集装置获取所述音频信号和视频信号混合后的混合信号,并将所述混合信号输出。
  13. 根据权利要求12所述的音视频传输装置,其特征在于,所述音视频传输装置按照时分复用的方式通过同一个无线通道传输所述音频信号和所述混合信号。
  14. 根据权利要求13所述的音视频传输装置,其特征在于,所述音频信号被传输的优先级高于所述混合信号被传输的优先级。
  15. 根据权利要求12所述的音视频传输装置,其特征在于,所述音频信号通过低延时低损压缩技术压缩后发送给所述音频传输单元,所述低延时低损压缩技术包括OPUS-CELT、ADPCM、LC3、LC3plus中的一种。
  16. 根据权利要求15所述的音视频传输装置,其特征在于,压缩获得的音频信号的帧长小于指定长度。
  17. 根据权利要求12所述的音视频传输装置,其特征在于,所述音视频 传输装置还包括WIFI传输单元,所述音频传输单元通过所述WIFI传输单元从所述无线麦克风发射装置接收所述音频信号,所述视频传输单元通过所述WIFI传输单元发送所述混合信号。
  18. 根据权利要求17所述的音视频传输装置,其特征在于,所述音视频传输装置还包括中央处理单元,所述中央处理单元用于从所述视频传输单元接收所述混合信号中的音频信号和所述混合信号中的视频信号,分别对所述混合信号中音频信号和所述混合信号中的视频信号进行压缩编码后输出给所述WIFI传输单元,以及从所述WIFI传输单元接收无线麦克风发射装置发送的音频信号,转换成数字压缩音频信号后发送给所述音频传输单元。
  19. 根据权利要求18所述的音视频传输装置,其特征在于,所述音频传输单元包括音频处理子单元、采样子单元以及第一接口子单元,
    所述音频处理子单元用于对所述数字压缩音频信号进行解码处理;
    所述采样子单元用于对所述解码处理后的数字压缩音频信号进行采样恢复得到模拟音频信号;
    所述第一接口子单元用于将所述模拟音频信号输出给所述视频采集装置。
  20. 根据权利要求18所述的音视频传输装置,其特征在于,所述视频传输单元包括第二接口子单元和视频信号处理子单元;
    所述第二接口子单元用于从所述视频采集装置接收所述混合信号,并分离出所述混合信号中的音频信号和所述混合信号中的视频信号,将所述混合 信号中的音频信号传输给所述中央处理单元,将所述混合信号中的视频信号传输给所述视频信号处理子单元;
    所述视频信号处理子单元用于对所述混合信号中的视频信号进行处理后发送给所述中央处理单元,以进行压缩编码处理。
  21. 根据权利要求18所述的音视频传输装置,其特征在于,所述音视频传输装置还包括CCU控制接口,所述中央处理单元通过所述CCU控制接口对所述视频采集装置的CCU进行控制。
  22. 根据权利要求18所述的音视频传输装置,其特征在于,所述音视频传输装置还包括与所述中央处理单元连接的扩展接口,用于连接以下一种或多种配件:LCD屏幕、按键和电源。
PCT/CN2020/076673 2020-02-20 2020-02-25 音视频传输装置及音视频传输系统 WO2021164043A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/799,973 US11997420B2 (en) 2020-02-20 2020-02-25 Audio and video transmission devices and audio and video transmission systems

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202062979186P 2020-02-20 2020-02-20
US62/979,186 2020-02-20

Publications (1)

Publication Number Publication Date
WO2021164043A1 true WO2021164043A1 (zh) 2021-08-26

Family

ID=70829743

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/076673 WO2021164043A1 (zh) 2020-02-20 2020-02-25 音视频传输装置及音视频传输系统

Country Status (3)

Country Link
US (1) US11997420B2 (zh)
CN (2) CN111225173A (zh)
WO (1) WO2021164043A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115174840A (zh) * 2022-09-06 2022-10-11 深圳市掌锐电子有限公司 基于mipi信号数据传输的控制系统

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11997420B2 (en) * 2020-02-20 2024-05-28 Shenzhen Hollyland Technology Co., Ltd. Audio and video transmission devices and audio and video transmission systems
CN112151044B (zh) * 2020-09-23 2024-06-11 北京百瑞互联技术股份有限公司 在lc3音频编码器中自动调节蓝牙播放设备频响曲线的方法、装置及存储介质
CN114584648A (zh) * 2020-11-30 2022-06-03 华为技术有限公司 一种音频与视频同步的方法及设备
CN114697733B (zh) * 2020-12-31 2023-06-06 华为技术有限公司 投屏音视频数据的传输方法以及相关设备
CN112929606A (zh) * 2021-01-29 2021-06-08 世邦通信股份有限公司 音视频采集方法、装置和存储介质
CN113114295A (zh) * 2021-04-02 2021-07-13 深圳市卡卓无线信息技术有限公司 一种对讲机通信方法及对讲机
US20220398216A1 (en) * 2021-06-14 2022-12-15 Videon Central, Inc. Appliances and methods to provide robust computational services in addition to a/v encoding, for example at edge of mesh networks
CN114125363A (zh) * 2021-11-19 2022-03-01 深圳奥尼电子股份有限公司 一种具有无线麦克风蓝牙传输的音视频会议系统的控制方法
CN115086578A (zh) * 2022-06-21 2022-09-20 华东师范大学 一种基于ZYNQ Ultrascale+SOC的高速视频处理平台
CN115022665A (zh) * 2022-06-27 2022-09-06 咪咕视讯科技有限公司 直播制作方法、装置、多媒体处理设备及多媒体处理系统
CN116471216B (zh) * 2023-05-26 2024-02-27 南京国兆光电科技有限公司 一种用于mipi协议层验证的硬件平台及其验证方法

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105448146A (zh) * 2015-12-31 2016-03-30 北京智教通教育科技有限公司 一种全触控多媒体教学及视频自动录制一体化系统
US20170019580A1 (en) * 2015-07-16 2017-01-19 Gopro, Inc. Camera Peripheral Device for Supplemental Audio Capture and Remote Control of Camera
CN107424617A (zh) * 2017-05-31 2017-12-01 深圳耀麟国际商贸有限公司 一种多媒体数据处理设备及方法

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0775099A (ja) * 1993-05-07 1995-03-17 Philips Electron Nv マルチプレックス直交振幅変調テレビジョン送信用送信方式、送信機及び受信機
KR100188084B1 (ko) * 1995-05-12 1999-06-01 김광호 비디오 신호선을 이용한 오디오 데이타의 전달 장치 및 그 방법
CN1402521A (zh) * 2001-08-10 2003-03-12 陆黛丽 一种万用通信系统
KR20050002206A (ko) * 2003-06-30 2005-01-07 (주)아이앤에스티 무선 랜 카메라
US20060055771A1 (en) * 2004-08-24 2006-03-16 Kies Jonathan K System and method for optimizing audio and video data transmission in a wireless system
US20070242839A1 (en) * 2006-04-13 2007-10-18 Stanley Kim Remote wireless microphone system for a video camera
KR101281814B1 (ko) * 2006-10-11 2013-07-04 삼성전자주식회사 입체음향 기록이 가능한 촬영장치 및 그 방법
EP2103110B1 (en) * 2006-12-20 2014-03-26 GVBB Holdings S.A.R.L Embedded audio routing switcher
CN101360200A (zh) * 2008-09-02 2009-02-04 宝利微电子系统控股公司 使用遥控器采集音视频并将其传输给电视机的装置和方法
US20110115878A1 (en) * 2009-11-18 2011-05-19 Next Generation Reporting, LLC Oral and video proceedings collection and transcription device
US10080061B1 (en) * 2009-12-18 2018-09-18 Joseph F. Kirley Distributing audio signals for an audio/video presentation
CN102143346B (zh) * 2010-01-29 2013-02-13 广州市启天科技股份有限公司 一种巡航拍摄存储方法及系统
KR101208205B1 (ko) * 2012-04-03 2012-12-04 최용석 원격 영상 모니터링 시스템 및 그 운용방법
US10122963B2 (en) * 2013-06-11 2018-11-06 Milestone Av Technologies Llc Bidirectional audio/video: system and method for opportunistic scheduling and transmission
EP2899957A1 (en) * 2014-01-22 2015-07-29 Phonetica Lab S.R.L. System for integrating video calls in telephone call centers
WO2016140479A1 (ko) * 2015-03-01 2016-09-09 엘지전자 주식회사 방송 신호 송신 장치, 방송 신호 수신 장치, 방송 신호 송신 방법, 및 방송 신호 수신 방법
CN205430457U (zh) * 2016-03-29 2016-08-03 博瑞恒创(天津)科技有限公司 一种多媒体同步播放系统
US10158905B2 (en) * 2016-09-14 2018-12-18 Dts, Inc. Systems and methods for wirelessly transmitting audio synchronously with rendering of video
CN106992959B (zh) * 2016-11-01 2023-08-18 圆周率科技(常州)有限公司 一种3d全景音视频直播系统及音视频采集方法
US11997420B2 (en) * 2020-02-20 2024-05-28 Shenzhen Hollyland Technology Co., Ltd. Audio and video transmission devices and audio and video transmission systems

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170019580A1 (en) * 2015-07-16 2017-01-19 Gopro, Inc. Camera Peripheral Device for Supplemental Audio Capture and Remote Control of Camera
CN105448146A (zh) * 2015-12-31 2016-03-30 北京智教通教育科技有限公司 一种全触控多媒体教学及视频自动录制一体化系统
CN107424617A (zh) * 2017-05-31 2017-12-01 深圳耀麟国际商贸有限公司 一种多媒体数据处理设备及方法

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115174840A (zh) * 2022-09-06 2022-10-11 深圳市掌锐电子有限公司 基于mipi信号数据传输的控制系统

Also Published As

Publication number Publication date
US11997420B2 (en) 2024-05-28
US20230078451A1 (en) 2023-03-16
CN111225173A (zh) 2020-06-02
CN211702218U (zh) 2020-10-16

Similar Documents

Publication Publication Date Title
WO2021164043A1 (zh) 音视频传输装置及音视频传输系统
CN117376615B (zh) 一种投屏显示方法及电子设备
US10432773B1 (en) Wireless audio transceivers
CN112313929B (zh) 一种自动切换蓝牙音频编码方式的方法及电子设备
WO2021018187A1 (zh) 投屏方法及设备
WO2022062739A1 (zh) 一种多设备蓝牙配对的通信方法和系统
KR101436593B1 (ko) Rf 듀얼밴드를 이용한 무선 송수신 장치
WO2022135303A1 (zh) 一种tws耳机连接方法及设备
WO2021203647A1 (zh) 一种无线麦克风装置及其使用方法
CN101345938A (zh) 一种基于广播网的手机终端电视接收装置及其应用方法
KR200397845Y1 (ko) 무선 음향 신호 중계 장치
CN209861096U (zh) 一种多功能插卡蓝牙卡拉ok音响pcba板
CN116017614B (zh) 一种通信方法及电子设备
US20150072724A1 (en) Terminal for wireless voice communication
WO2022042151A1 (zh) 一种录音方法和设备
WO2023273763A1 (zh) 一种视频数据的传输方法及装置
CN103079142B (zh) 一种无线低音炮低音延时可调系统及方法
WO2022041681A1 (zh) 无线耳机、控制无线耳机的方法及存储介质
CN202077166U (zh) 一种同步调节音响及其应用之设备
CN114765831A (zh) 上行资源预申请的方法及相关设备
CN221710079U (zh) 一种多功能无线导游音频设备
US12126951B2 (en) Wireless microphone device and use method therefor
CN218941188U (zh) 一种带有导播功能的无线直播系统
CN111917478B (zh) 融合5g微基站的智能媒体终端
CN204377082U (zh) 无线控制多媒体音响

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20920333

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20920333

Country of ref document: EP

Kind code of ref document: A1