WO2017148178A1 - Audio and video processing method and related device (一种音视频处理方法以及相关设备) - Google Patents

Audio and video processing method and related device

Info

Publication number
WO2017148178A1
Authority
WO
WIPO (PCT)
Prior art keywords
audio data
source device
sink device
auxiliary audio
auxiliary
Prior art date
Application number
PCT/CN2016/105442
Other languages
English (en)
French (fr)
Inventor
钟国杰 (ZHONG, Guojie)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2017148178A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N 21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N 21/436 Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N 21/4363 Adapting the video stream to a specific local network, e.g. a Bluetooth® network
    • H04N 21/43632 Adapting the video stream to a specific local network involving a wired protocol, e.g. IEEE 1394
    • H04N 21/43635 HDMI
    • H04N 21/439 Processing of audio elementary streams

Definitions

  • The present invention relates to the field of multimedia and, in particular, to an audio and video processing method and related equipment.
  • High-Definition Multimedia Interface (HDMI) is a digital video/audio interface technology.
  • An HDMI interface can transmit uncompressed audio data and high-resolution video data over a single cable. Because the HDMI interface supports digital audio and video transmission, it can bring high-quality audio-visual enjoyment to users, so HDMI interface technology is used more and more widely in consumer electronic products.
  • In the prior art, the audio and video processing system includes a source device 101 and a sink device 102. A microphone 103 connected to the source device 101 receives the song "Qinghai-Tibet Plateau" sung by a singer, converts it into auxiliary audio data, and transmits the auxiliary audio data to the source device 101. The mixing processor 104 of the source device receives the auxiliary audio data and the main audio data, where the main audio data is the accompaniment for the video data, and mixes the main audio data and the auxiliary audio data.
  • The mixed main audio data and auxiliary audio data are sent to the synchronization processor 109, which also receives the video data and synchronizes the video data with the mixed main audio data and auxiliary audio data.
  • The synchronized video data, main audio data, and auxiliary audio data are sent to the HDMI interface 105, and the HDMI interface 105 of the source device 101 transmits the video data, the main audio data, and the auxiliary audio data to the HDMI interface 106 of the sink device.
  • The HDMI interface 106 passes the video data, main audio data, and auxiliary audio data to the synchronization processor 110 of the sink device 102, which synchronizes the received streams again.
  • The synchronization processor 110 sends the synchronized video data to the display 108, which displays the video data, and sends the synchronized main audio data and auxiliary audio data to a speaker 107 for playback.
  • The direct sound delay refers to the difference between the time when the user hears the audio data and the time when the audio data starts to propagate.
  • The direct sound delay directly affects the sound quality of the sound emitted by the audio and video processing system. If the direct sound delay is too large, the listener perceives an obvious echo and the sound quality is very bad. Experiments show that when the direct sound delay is less than 30 milliseconds, the sound quality is very good; between 30 and 80 milliseconds, it is acceptable; and above 80 milliseconds, it deteriorates: the listener perceives an echo, and the singer cannot hear his own voice in real time.
  • The synchronization processor 109 of the source device 101 and the synchronization processor 110 of the sink device 102, as shown in FIG. 1, both synchronize the video data, the main audio data, and the auxiliary audio data, thereby increasing the direct sound delay of the auxiliary audio data.
  • This may cause the direct sound delay of the auxiliary audio data to exceed 100 milliseconds, so that the sound played by the speaker 107 has an obvious echo and poor sound quality.
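The delay thresholds above can be sketched as a small helper function; the function name and the quality labels are illustrative, not part of the patent:

```python
def rate_direct_sound_delay(delay_ms: float) -> str:
    """Classify perceived sound quality from the direct sound delay,
    using the 30 ms / 80 ms thresholds described in the text."""
    if delay_ms < 30:
        return "very good"      # no audible echo
    if delay_ms <= 80:
        return "acceptable"
    return "poor"               # obvious echo; singer hears a lag
```

With this rule, a 100 ms delay (as in the prior-art system described above) falls squarely into the "poor" band.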
  • the embodiment of the invention provides an audio and video processing method and related device capable of effectively reducing the direct sound delay of the auxiliary audio data.
  • a first aspect of the embodiments of the present invention provides an audio and video processing method, which is based on an audio and video playback system:
  • the audio and video playback system includes a source device and a sink device connected to the source device.
  • The source device refers to an audio and video output device with a High-Definition Multimedia Interface (HDMI).
  • the sink device indicates an audio and video receiving device with an HDMI interface.
  • the source device is connected to the sink device in HDMI mode.
  • both the source device and the sink device support the HDMI protocol.
  • The HDMI 2.0 and later protocols support two channels of audio streams: the primary audio data and the secondary audio data.
  • In this embodiment, a karaoke mode is added to the existing HDMI 2.0 and later protocols.
  • For details, see Table 1:
  • the video data is MV video data
  • the main audio data may be audio data used for accompaniment of the MV video data
  • the auxiliary audio data may be, for example, audio data sung by the user.
  • the source device needs to keep the main audio data, the auxiliary audio data, and the video data synchronized, and the sink device also needs to maintain the main audio data, the auxiliary audio data, and the video. data synchronization.
  • the source device needs to keep the main audio data and the video data synchronized, and the source device does not need to keep the video data and the auxiliary audio data synchronized, and the sink device needs to keep the main audio data and the video data synchronized.
  • the sink device does not need to keep the auxiliary audio data synchronized with the video data.
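The two service types above differ only in whether the auxiliary audio stream must be kept synchronized with the video. A minimal sketch of that policy (the mode names and dictionary layout are illustrative, not defined by the HDMI protocol):

```python
# Which stream pairs each device must keep synchronized per service type.
SYNC_POLICY = {
    "normal":  {"video+main_audio": True, "video+aux_audio": True},
    "karaoke": {"video+main_audio": True, "video+aux_audio": False},
}

def must_sync_aux(mode: str) -> bool:
    """Both the source device and the sink device apply the same rule:
    auxiliary audio is synchronized only in the normal mode."""
    return SYNC_POLICY[mode]["video+aux_audio"]
```

In either mode the main audio stays locked to the video; only the auxiliary (user-sung) stream is exempted in karaoke mode.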
  • the source device connects to the sink device by using HDMI
  • the HDMI interface of the source device is connected to the HDMI interface of the sink device through an HDMI cable.
  • the source device determines the type of service supported by the sink device, that is, whether the sink device supports the normal mode or the karaoke mode;
  • the sink device sends mode information that can indicate the type of service supported by the sink device to the source device.
  • the mode information shown in this embodiment can indicate that the sink device supports the normal mode, or the mode information can indicate that the sink device supports the karaoke mode.
  • the source device may send, to the sink device, a message for requesting the sink device to send the mode information, and when the sink device receives the message for requesting the sink device to send the mode information, the sink device may send the mode information to the source device;
  • the source device can determine the type of service supported by the sink device.
  • the sink device can automatically send the mode information to the source device.
  • the source device can determine the type of service that the sink device can support according to the mode information, that is, the source device can determine whether the sink device supports the normal mode or the karaoke mode.
  • The sink device uses the mode information to notify the source device that the sink device supports the karaoke mode.
  • The sink device indicates the first processing mode by using the first identifier included in the mode information, where the first processing mode is the karaoke mode shown in Table 1.
  • the audio and video processing method further includes:
  • the source device receives the auxiliary audio data through the external device
  • the external device is a device connected to the source device
  • the external device can be a microphone device.
  • the source device may only need to keep the video data and the main audio data synchronized without synchronizing the auxiliary audio data.
  • the source device transmits the unsynchronized auxiliary audio data, the synchronized processed main audio data, and the video data to the sink device through the HDMI interface.
  • the sink device synchronizes the received main audio data and the video data, and the sink device does not need to synchronize the auxiliary audio data with the video data.
  • the sink device is capable of displaying video data, and the sink device is also capable of playing secondary audio data and primary audio data synchronized with the video data.
  • Because the source device and the sink device both support the karaoke mode, neither of them synchronizes the auxiliary audio data with the video data, thereby reducing the direct sound delay of the auxiliary audio data, so that the user does not hear an obvious echo in the auxiliary audio data played by the sink device, effectively guaranteeing the sound quality of the audio played by the sink device.
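The sink-side behaviour just described can be sketched as follows; `sync` stands in for the sink's synchronization processor, and all names are hypothetical:

```python
def sink_render(video_frame, main_audio, aux_audio, sync):
    """Sketch of the karaoke-mode sink: main audio is synchronized with
    video before playback, while auxiliary audio bypasses the
    synchronization step entirely and is played immediately."""
    out = []
    if aux_audio is not None:
        out.append(("play_now", aux_audio))   # no synchronization delay
    v, a = sync(video_frame, main_audio)      # lip-sync only the main path
    out.append(("display", v))
    out.append(("play", a))
    return out
```

Because the auxiliary audio skips the `sync` stage, its direct sound delay is only the transport and playback latency, which is what keeps the echo below the audible threshold.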
  • the mode information is a list of auxiliary audio types
  • the auxiliary audio type list shown in Table 2 includes the correspondence between the identifier and the audio and video data processing manner.
  • the identification shown in Table 2 is 3-bit binary data.
  • The audio and video data processing manner corresponding to the second identifiers “000”, “001”, “010”, “011”, and “100” in the auxiliary audio type list shown in Table 2 is the one stipulated by the existing HDMI 2.0 and later protocols: the source device sends two channels of audio data to the sink device, one carrying the primary audio data and the other the secondary audio data, and both the source device and the sink device need to keep the video data, the primary audio data, and the secondary audio data synchronized.
  • The first identifier “101” is added to the reserved identifier field of the auxiliary audio type list shown in Table 2, and the first processing mode corresponding to the first identifier is added in the reserved extension field of the audio and video data processing manner.
  • the first processing mode is to not synchronize the auxiliary audio data.
  • The sink device uses the first processing mode to notify the source device that the secondary audio data is not to be synchronized, that is, the sink device indicates through the first processing mode that it supports the karaoke mode.
  • the source device receives the auxiliary audio type list sent by the sink device through the HDMI interface.
  • The source device can determine from the auxiliary audio type list that the sink device supports the karaoke mode, so that the source device can skip keeping the auxiliary audio data synchronized with the video data, which removes the direct sound delay that such synchronization would add and thereby effectively reduces the direct sound delay of the auxiliary audio data measured at the source device.
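A possible encoding of the auxiliary audio type list described above (the dictionary representation is an assumption; the 3-bit codes and the meaning of “101” come from the text):

```python
# 3-bit identifiers: "000".."100" are the existing HDMI 2.0 entries that
# require full synchronization; "101" is the added karaoke entry.
AUX_AUDIO_TYPES = {
    0b000: "sync", 0b001: "sync", 0b010: "sync",
    0b011: "sync", 0b100: "sync",
    0b101: "no_sync",   # first identifier: aux audio not synchronized
}

def supports_karaoke(type_list: dict) -> bool:
    """Source-side check: does the sink advertise identifier '101'?"""
    return type_list.get(0b101) == "no_sync"
```

A sink that only advertises the legacy identifiers would fail this check, and the source would fall back to synchronizing all three streams.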
  • After the source device receives the auxiliary audio type list, the source device needs to determine whether both the source device and the sink device support the first processing mode.
  • the source device may determine that the sink device supports the first processing mode
  • The source device pre-stores a capability set, where the capability set stores the processing modes in which the source device can process the video data, the auxiliary audio data, and the main audio data.
  • If the source device determines that the first identifier is stored in the capability set, the source device may determine that it supports the first processing mode; if the source device determines that the first identifier is not stored in the capability set, the source device may determine that it does not support the first processing mode.
  • If the source device determines that it supports the first processing mode, the source device sends response information to the sink device through the HDMI interface, and the sink device can determine from the response information that the auxiliary audio data received from the source device is not to be synchronized.
  • the response information may be a first identifier corresponding to the first processing mode
  • the response information may be information that includes the target content, and the target content is that the sink device does not perform synchronization processing on the auxiliary audio data received from the source device.
  • The source device indicates through the response information that it supports the karaoke mode, so the sink device can operate in the karaoke mode, that is, the sink device does not synchronize the auxiliary audio data with the video data, thereby effectively reducing the direct sound delay of the auxiliary audio data.
  • If the source device does not support the first processing mode (that is, skipping the synchronization processing of the auxiliary audio data), the source device performs synchronization processing on the video data, the main audio data, and the auxiliary audio data;
  • the source device transmits the synchronized auxiliary audio data, the main audio data, and the video data to the sink device through the HDMI interface.
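The source-side decision described in the preceding bullets can be sketched as one function; the return dictionary and `FIRST_ID` constant are illustrative stand-ins for the response information and the “101” identifier:

```python
def source_negotiate(capability_set: set, sink_mode_info: set) -> dict:
    """Sketch of the source-side negotiation: if both the sink's mode
    information and the source's own capability set contain the first
    identifier, confirm karaoke mode; otherwise fall back to full sync."""
    FIRST_ID = 0b101   # first processing mode: aux audio not synchronized
    if FIRST_ID in sink_mode_info and FIRST_ID in capability_set:
        # Source confirms to the sink, then sends unsynchronized aux
        # audio plus main audio/video synchronized with each other.
        return {"response": FIRST_ID, "sync_aux": False}
    # Fallback: behave as in the normal mode and synchronize all streams.
    return {"response": None, "sync_aux": True}
```

Note that either side failing the check is enough to force the fallback, matching the text's requirement that both devices support the first processing mode.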
  • a second aspect of the embodiments of the present invention provides an audio and video data processing method, where the method includes:
  • The sink device sends the configured mode information to the source device through the high-definition multimedia interface (HDMI);
  • the sink device receives the unsynchronized auxiliary audio data, the synchronized main audio data, and the video data that are sent by the source device through the HDMI interface;
  • the sink device in this embodiment can support the case where the service type is the karaoke mode, and the sink device synchronizes the video data and the main audio data, and the sink device does not need to keep the auxiliary audio data and the video data synchronized.
  • the sink device displays the video data
  • the sink device displays the video data through the display of the sink device.
  • the sink device plays the secondary audio data and primary audio data synchronized with the video data.
  • the sink device plays the auxiliary audio data and the main audio data synchronized with the video data through the speaker of the sink device.
  • The source device and the sink device do not synchronize the auxiliary audio data with the video data, thereby reducing the direct sound delay of the auxiliary audio data, so that the auxiliary audio data played by the sink device does not let the user hear an obvious echo, which effectively guarantees the sound quality of the audio played by the sink device.
  • The sink device determines, by using the capability set pre-stored in its HDMI interface, whether it supports skipping the synchronization processing of the auxiliary audio data; the capability set indicates either support for synchronizing the auxiliary audio data or support for not synchronizing it;
  • the sink device may determine that the sink device can support the synchronization processing of the auxiliary audio data.
  • the sink device may determine that the sink device cannot support the synchronization processing of the auxiliary audio data.
  • If the sink device determines, by using the capability set pre-stored in the HDMI interface, that the secondary audio data is not to be synchronized, the sink device sends the mode information including the first identifier to the source device.
  • The sink device notifies the source device through the first identifier that the sink device supports the karaoke mode.
  • the sink device determines, according to the response information, that the source device supports the karaoke mode.
  • The source device indicates through the response information that it supports the karaoke mode, so the sink device can operate in the karaoke mode, that is, the sink device does not synchronize the auxiliary audio data with the video data, thereby effectively reducing the direct sound delay of the auxiliary audio data.
  • The mode information is configured by the sink device, and the mode information is an auxiliary audio type list, as shown in Table 3.
  • The sink device can notify the source device through the auxiliary audio type list that it supports the karaoke mode, so that the source device can skip keeping the auxiliary audio data synchronized with the video data, which removes the direct sound delay that such synchronization would add and thereby effectively reduces the direct sound delay of the auxiliary audio data measured at the source device.
  • a third aspect of the embodiments of the present invention provides a source device, including a high definition multimedia interface HDMI, a processor, and a memory;
  • An HDMI interface configured to receive mode information sent by the sink device, where the mode information includes a first identifier, where the first identifier is used to indicate that the source device does not perform synchronization processing on the auxiliary audio data received from the external device;
  • a processor, configured to determine whether the source device itself supports skipping the synchronization processing of the auxiliary audio data;
  • a memory for storing video data and main audio data
  • the processor is further configured to: if the source device supports skipping the synchronization processing of the auxiliary audio data, perform synchronization processing on the video data and the main audio data;
  • the HDMI interface is further configured to send the unsynchronized auxiliary audio data and the synchronized main audio data and video data to the sink device.
  • the HDMI interface is further configured to pre-store a capability set, where the capability set indicates either support for synchronizing the auxiliary audio data or support for not synchronizing it;
  • the processor is further configured to determine whether to support the synchronization processing of the auxiliary audio data by using a pre-stored capability set in the HDMI interface.
  • the HDMI interface is further configured to: if the source device supports not synchronizing the auxiliary audio data, send response information to the sink device, where the response information is used to instruct the sink device not to perform synchronization processing on the auxiliary audio data received from the source device.
  • the mode information further includes a second identifier, where the second identifier is used to indicate that the source device performs synchronization processing on the auxiliary audio data received from the external device;
  • the processor is further configured to: if the source device does not support skipping the synchronization processing of the auxiliary audio data, perform synchronization processing on the video data, the main audio data, and the auxiliary audio data;
  • the HDMI interface is further configured to send the synchronized auxiliary audio data, the main audio data, and the video data to the sink device.
  • a fourth aspect of the embodiments of the present invention provides a sink device, including a high definition multimedia interface HDMI, a processor, and a player;
  • An HDMI interface configured to send mode information to the source device, where the mode information includes a first identifier, where the first identifier is used to indicate that the source device does not perform synchronization processing on the auxiliary audio data received from the external device;
  • the HDMI interface is further configured to receive the unsynchronized auxiliary audio data and the synchronized main audio data and video data sent by the source device;
  • a player, configured to play the unsynchronized auxiliary audio data and the synchronized main audio data and video data.
  • the HDMI interface is further configured to pre-store a capability set, where the capability set indicates either support for synchronizing the auxiliary audio data or support for not synchronizing it;
  • the processor is further configured to determine, by using the capability set pre-stored in the HDMI interface, whether the sink device supports skipping the synchronization processing of the auxiliary audio data;
  • the HDMI interface is further configured to: if the processor determines, by using the capability set, that the auxiliary audio data is not synchronized, sending the mode information including the first identifier to the source device.
  • the HDMI interface is further configured to receive the response information sent by the source device, where the response information is used to instruct the sink device not to perform synchronization processing on the auxiliary audio data received from the source device.
  • In the sink device of any one of the implementations of the fourth aspect of the embodiments of the present invention,
  • the processor is further configured to: configure mode information, where the mode information further includes a second identifier, where the second identifier is used to indicate that the source device performs synchronization processing on the auxiliary audio data received from the external device;
  • the HDMI interface is further configured to: if the source device does not support skipping the synchronization processing of the auxiliary audio data, receive the synchronized auxiliary audio data, main audio data, and video data sent by the source device through the HDMI interface;
  • the processor is further configured to perform synchronous processing on the video data, the main audio data, and the auxiliary audio data;
  • the player is further configured to play the synchronized auxiliary audio data, main audio data, and video data.
  • the embodiment of the present invention provides an audio and video data processing method, a related device, and a playback system.
  • The audio and video data processing method includes: the source device receives mode information sent by the sink device, where the mode information includes a first identifier used to instruct the source device not to perform synchronization processing on the auxiliary audio data received from the external device; the source device receives the auxiliary audio data through the external device; the source device synchronizes the video data and the main audio data; and the source device sends the unsynchronized auxiliary audio data, the synchronized main audio data, and the video data to the sink device through the HDMI interface.
  • The source device does not synchronize the video data with the auxiliary audio data, but sends the auxiliary audio data separately, together with the main audio data synchronized with the video data, to the sink device, and the sink device likewise does not synchronize the auxiliary audio data with the video data. This reduces the direct sound delay of the auxiliary audio data, so that the auxiliary audio data played by the sink device does not let the user hear an obvious echo, effectively guaranteeing the sound quality of the audio played by the sink device.
  • FIG. 1 is a schematic diagram of audio and video data propagating through an audio and video data processing system in the prior art;
  • FIG. 2 is a schematic structural diagram of an embodiment of an audio and video data playing system provided by the present invention.
  • FIG. 3 is a flow chart of steps of an embodiment of an audio and video data processing method according to the present invention.
  • FIG. 4 is a schematic structural diagram of an embodiment of a source device according to the present invention.
  • FIG. 5 is a schematic structural diagram of an embodiment of a sink device according to the present invention.
  • the audio and video data playing system shown in this embodiment includes a source device and a sink device connected to the source device.
  • The source device refers to an audio and video data output device with a High-Definition Multimedia Interface (HDMI).
  • the source device can be a DVD player, a set-top box (STB), or a Blu-ray player.
  • the sink device is an audio and video data receiving device with an HDMI interface.
  • the sink device can be a television that supports the HDMI interface.
  • the source device is connected to the sink device through an HDMI cable.
  • HDMI interface of the source device and the HDMI interface of the sink device shown in this embodiment are connected by an HDMI cable.
  • the advantage of using the HDMI interface for source and sink device connections is that the source device can simultaneously transmit audio data and video data to the sink device.
  • The HDMI interface is a digital video/audio interface technology.
  • The HDMI interface is a dedicated digital interface suitable for image transmission that can transmit audio and video data simultaneously, with a maximum data transmission speed of 5 Gbps, and no digital-to-analog or analog-to-digital conversion is needed before data transfer.
  • The HDMI interface can be combined with High-bandwidth Digital Content Protection (HDCP) to prevent unauthorized copying of copyrighted audio and video content.
  • The audio data and video data on the HDMI interface are transmitted using transition-minimized differential signaling (TMDS).
  • Step 301: The source device receives the auxiliary audio data through the external device.
  • the audio and video data playing system shown in this embodiment further includes an external device 201.
  • the external device 201 is a device connected to the source device and used to receive an audio signal.
  • The external device 201 can be a microphone device, an electronic musical instrument, or the like.
  • The electronic musical instrument can be, for example, a guitar, a piano, or another instrument.
  • the external device 201 is not limited in this embodiment, as long as the external device 201 can receive the audio signal input by the user.
  • the external device 201 is a microphone device
  • the user inputs an audio signal through the microphone device, and the microphone device can process the audio signal that has been input to form the auxiliary audio data.
  • The audio signal input by the user through the microphone may be speech or singing.
  • The audio signal input by the user through the electronic musical instrument may be the sound produced by playing the instrument.
  • this embodiment describes the audio and video data processing method using a karaoke scenario as an example: the external device is a microphone device, the user inputs the sung audio signal through the microphone device, the microphone device converts the audio signal into auxiliary audio data, and the converted auxiliary audio data is sent to the source device.
  • Step 302: The source device processes the auxiliary audio data.
  • the auxiliary audio data processor 207 of the source device receives the auxiliary audio data sent by the external device 201.
  • the auxiliary audio data processor 207 processes the auxiliary audio data.
  • the auxiliary audio data processor 207 can perform different processing on the auxiliary audio data in different application scenarios, and the specific processing manner is not limited in this embodiment.
  • the auxiliary audio data processor 207 can perform karaoke sound processing on the auxiliary audio data.
  • the auxiliary audio data processor 207 transmits the processed auxiliary audio data to the HDMI interface 205 of the source device.
  • Step 303: The sink device configures mode information.
  • the HDMI interface 208 of the sink device configures mode information, and the mode information is used to indicate whether the source device performs synchronization processing on the auxiliary audio data received from the external device.
  • the mode information is a list of auxiliary audio types.
  • the sink device notifies the application scenario of the source device audio and video data through the auxiliary audio type list, so that the source device determines how to process the audio and video data according to the auxiliary audio type list.
  • the auxiliary audio type list shown in this embodiment is an improvement on the auxiliary audio type list shown in the prior art, so that the improved auxiliary audio type list enables the source device to determine whether the sink device supports the application scenario in which the auxiliary audio data is not synchronized.
  • the auxiliary audio type list shown in the prior art includes the correspondence between the identifier and the audio and video data processing manner.
  • the identification shown in the prior art is 3-bit binary data.
  • the auxiliary audio type list includes a correspondence between the second identifier and the second processing mode.
  • the second processing mode is to perform synchronization processing on the auxiliary audio data.
  • the second processing mode corresponding to the second identifiers "000", "001", "010", "011", and "100" in the auxiliary audio type list shown in the prior art is as specified by the existing HDMI 2.X protocol; see that protocol for details, which are not repeated in this embodiment.
  • the HDMI 2.X protocol in this embodiment refers to the HDMI 2.0 protocol or a higher version, where X is a natural number greater than or equal to zero.
  • the identifier included in the auxiliary audio type list provided by the prior art is determined as the second identifier, and the audio and video data processing manner corresponding to the second identifier shown in the prior art is determined as the second processing mode. That is, the auxiliary audio type list shown in the prior art includes the correspondence between the second identifier and the second processing mode.
  • the second processing mode provided by the prior art is not described in this embodiment.
  • the second processing mode is to perform synchronous processing on the auxiliary audio data.
  • the sink device configures the auxiliary audio type list
  • specifically, when the sink device configures the auxiliary audio type list, a new identifier is added in the identifier reserved field of the auxiliary audio type list specified by the existing HDMI 2.X protocol, and an audio and video data processing mode corresponding to the new identifier is added in the reserved field of the audio and video data processing modes; Table 3 below describes the auxiliary audio type list provided by this embodiment:
  • the audio and video data processing modes corresponding to the second identifiers "000", "001", "010", "011", and "100" in the auxiliary audio type list shown in this embodiment remain as specified by the existing HDMI 2.X protocol; that is, the auxiliary audio type list shown in this embodiment does not change the audio and video data processing modes already specified by the existing HDMI 2.X protocol.
  • the first identifier "101" is added in the identifier reserved field of the auxiliary audio type list in the HDMI 2.X protocol, and the first processing mode corresponding to the first identifier is added in the reserved extension field of the audio and video data processing modes; the first processing mode is to not perform synchronization processing on the auxiliary audio data.
  • the value of the first identifier is not limited in this embodiment, as long as the first identifier added in this embodiment differs from the second identifiers already specified by the existing HDMI 2.X protocol.
  • the sink device notifies the source device, through the first identifier, that the auxiliary audio data is not to be synchronized.
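As a minimal illustration (not part of the protocol text, and with assumed mode labels), the extended auxiliary audio type list of Table 3 can be modeled as a lookup from 3-bit identifiers to processing modes:

```python
# Hypothetical model of the Table 3 auxiliary audio type list.
# Identifiers "000"-"100" keep the meanings specified by HDMI 2.X
# (second processing mode: synchronize the auxiliary audio data);
# "101" is the first identifier added by this embodiment
# (first processing mode: do not synchronize the auxiliary audio data).
SECOND_MODE = "sync-aux"    # assumed label for the second processing mode
FIRST_MODE = "no-sync-aux"  # assumed label for the first processing mode

AUX_AUDIO_TYPE_LIST = {
    "000": SECOND_MODE,
    "001": SECOND_MODE,
    "010": SECOND_MODE,
    "011": SECOND_MODE,
    "100": SECOND_MODE,
    "101": FIRST_MODE,  # new identifier in the reserved field
}

def sink_supports_first_mode(type_list):
    """The sink supports the first processing mode iff it advertises "101"."""
    return "101" in type_list
```

A source device receiving a list without the "101" entry would therefore fall back to the second processing mode.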
  • note that there is no fixed order between step 303 and steps 301 to 302 in this embodiment.
  • Step 304: The sink device sends the auxiliary audio type list to the source device.
  • the HDMI interface 208 of the sink device transmits the list of auxiliary audio types to the HDMI interface 205 of the source device through the HDMI cable.
  • the sink device uses the auxiliary audio type list to notify the source device whether the sink device supports not performing synchronization processing on the auxiliary audio data; to do so, the sink device shown in this embodiment first needs to determine whether it supports not performing synchronization processing on the auxiliary audio data.
  • the HDMI interface 208 of the sink device may pre-store a set of capabilities.
  • the capability set of the sink device stores the processing modes in which the sink device can process the video data, the auxiliary audio data, and the main audio data.
  • the sink device may send the auxiliary audio type list including the first identifier and the first processing mode to the source device.
  • the sink device may send the auxiliary audio type list including the second identifier and the second processing mode to the source device.
  • Step 305: The source device receives the auxiliary audio type list.
  • the HDMI interface 205 of the source device receives the auxiliary audio type list sent by the HDMI interface 208 of the sink device through the HDMI cable.
  • Step 306: The source device determines whether the first identifier is included in the auxiliary audio type list. If yes, step 307 is performed; if no, step 308 is performed.
  • the source device determines whether the sink device supports the first processing mode.
  • if the source device determines that the auxiliary audio type list includes the first identifier, the source device determines that the sink device supports the first processing mode; if the source device determines that the auxiliary audio type list does not include the first identifier, the source device determines that the sink device supports only the second processing mode.
  • Step 307: The source device determines whether it supports not performing synchronization processing on the auxiliary audio data. If not, step 308 is performed; if yes, step 312 is performed.
  • the HDMI interface 205 of the source device is pre-stored with a capability set.
  • the capability set of the source device stores the processing modes in which the source device is capable of processing video data, auxiliary audio data, and main audio data.
  • the first processing mode and/or the second processing mode may be stored in the capability set, where the specific descriptions of the first processing mode and the second processing mode are as described above and are not repeated here.
  • if the source device determines that the first processing mode is included in the capability set, the source device may determine that it supports the first processing mode; if the source device determines that the first processing mode is not included in the capability set, the source device may determine that it does not support the first processing mode.
  • if the source device performs step 306 and determines that the first identifier is not included in the auxiliary audio type list, and/or the source device performs step 307 and determines that it does not support not performing synchronization processing on the auxiliary audio data, then step 308 is performed.
  • Step 308: The source device processes the audio and video data according to the second processing mode.
  • the second processing mode is an audio and video data processing method that has been stipulated by the existing HDMI 2.X protocol, and is not specifically described in this embodiment.
  • the source device may process the audio and video data according to the second processing mode.
  • if the source device determines that the auxiliary audio type list sent by the sink device includes the first identifier and the second identifier, but the capability set of the source device includes only the second processing mode, the source device processes the audio and video data according to the second processing mode.
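The decision in steps 306 to 308 can be sketched compactly as follows (the mode labels are assumptions for illustration, not protocol-defined values):

```python
def choose_processing_mode(aux_type_list, source_capability_set):
    """Fall back to the second processing mode unless both the sink
    (via identifier "101" in its auxiliary audio type list) and the
    source (via its pre-stored capability set) support the first mode."""
    sink_supports_first = "101" in aux_type_list                     # step 306
    source_supports_first = "no-sync-aux" in source_capability_set   # step 307
    if sink_supports_first and source_supports_first:
        return "first"   # process without synchronizing the auxiliary audio
    return "second"      # step 308: synchronize all streams
```

If either side lacks the first processing mode, both devices operate exactly as in the existing HDMI 2.X protocol.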
  • Step 309: The source device sends the synchronized auxiliary audio data, the main audio data, and the video data to the sink device through the HDMI interface.
  • if the source device determines that it does not support not performing synchronization processing on the auxiliary audio data received from the external device, and/or the source device determines that the sink device does not support not performing synchronization processing on the auxiliary audio data, the source device may send the auxiliary audio data, main audio data, and video data synchronized in the second processing mode to the sink device.
  • Step 310: The sink device receives the synchronized auxiliary audio data, the main audio data, and the video data sent by the source device through the HDMI interface.
  • the HDMI interface 208 of the sink device can receive the HDMI signal sent by the HDMI cable, and the HDMI interface 208 of the sink device can acquire the auxiliary audio data, the main audio data, and the video data included in the HDMI signal.
  • Step 311: The sink device processes the auxiliary audio data, the main audio data, and the video data according to the second processing mode.
  • the sink device performs the synchronization processing on the auxiliary audio data, the main audio data, and the video data according to the prior art, which is not specifically described in this embodiment.
  • Step 312: The source device processes the audio and video data according to the first processing mode.
  • the source device can synchronize the video data and the main audio data.
  • the memory 203 of the source device stores video data and main audio data.
  • the video data is MV video data
  • the main audio data may be audio data used for accompaniment of the MV video data
  • the video data decoder 202 of the source device acquires video data from the memory 203 and decodes the acquired video data.
  • Video data decoder 202 transmits the decoded video data to synchronization processor 204.
  • the main audio data decoder 205 of the source device acquires main audio data from the memory 203 and decodes the acquired main audio data.
  • the main audio data decoder 205 transmits the decoded main audio data to the synchronization processor 204.
  • the synchronization processor 204 performs synchronization processing on the video data and the main audio data so that the video data is synchronized with the main audio data.
  • this embodiment does not limit how the synchronization processor 204 specifically synchronizes the video data with the main audio data, as long as the processed video data is synchronized with the main audio data.
  • the synchronization processor 204 performs a delay buffering process on the main audio data to synchronize the video data with the main audio data.
  • the synchronization processor 204 transmits the synchronized processed video data and the main audio data to the HDMI interface 205 of the source device.
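One way the delay-buffering performed by synchronization processor 204 could be pictured is a fixed-length FIFO on the main-audio path, so that main-audio frames emerge aligned with the slower video pipeline. This is only a sketch under an assumed frame-based processing model:

```python
from collections import deque

class DelayBuffer:
    """Delays main-audio frames by a fixed frame count so that they leave
    the synchronization processor aligned with the video path."""
    def __init__(self, delay_frames):
        # Pre-fill with placeholders (None) to realize the initial delay.
        self._fifo = deque([None] * delay_frames)

    def push(self, frame):
        """Accept one new frame; emit the frame pushed delay_frames ago."""
        self._fifo.append(frame)
        return self._fifo.popleft()
```

For example, with a delay of two frames, the first two outputs are placeholders and the third output is the first frame pushed in.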
  • Step 313: The source device sends the unsynchronized auxiliary audio data and the synchronized main audio data and video data to the sink device through the HDMI interface.
  • the source device may refrain from synchronizing the auxiliary audio data with the video data, and send the unsynchronized auxiliary audio data and the synchronized main audio data and video data to the sink device.
  • the HDMI interface 205 of the source device shown in this embodiment can transmit the auxiliary audio data, the main audio data, and the video data to the HDMI interface 208 of the sink device through the HDMI cable in the form of an HDMI signal.
  • the source device shown in this embodiment does not synchronize the auxiliary audio data with the video data, thereby reducing the direct sound delay of the auxiliary audio data on the source device side.
  • Step 314: The source device sends response information to the sink device.
  • the response information is used to instruct the sink device not to perform synchronization processing on the auxiliary audio data received from the source device.
  • the source device can send the response information to the sink device in two ways.
  • in the first manner, the source device may query the auxiliary audio type list shown in Table 3 and determine the first identifier corresponding to the first processing mode; in this case, the first identifier is the response information.
  • the HDMI interface 205 of the source device sends the first identifier to the HDMI interface 208 of the sink device through the HDMI cable.
  • in the second manner, the source device may generate response information including target content, where the target content indicates that the sink device is not to synchronize the auxiliary audio data received from the source device, so that when the sink device receives the response information including the target content, the sink device can determine not to synchronize the auxiliary audio data received from the source device.
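The two manners of constructing the response information might be sketched as follows (the labels and field names are illustrative assumptions, not values defined by the protocol):

```python
def build_response(aux_type_list, first_mode_label="no-sync-aux",
                   use_target_content=False):
    """Manner 1: return the first identifier looked up from the auxiliary
    audio type list. Manner 2: return a message carrying target content
    telling the sink not to synchronize the auxiliary audio data."""
    if use_target_content:
        return {"target_content": "do-not-sync-aux"}
    for identifier, mode in aux_type_list.items():
        if mode == first_mode_label:
            return identifier
    return None  # the list advertises no first processing mode
```

Either form lets the sink device conclude that the source supports the first processing mode.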
  • Step 315: The sink device receives the response information sent by the source device.
  • for details about the response information, see step 314; the details are not repeated in this step.
  • Step 316: The sink device determines, according to the response information, that the source device supports the first processing mode.
  • the sink device may determine the first processing mode corresponding to the first identifier according to the auxiliary audio type list, and the sink device may determine that the source device supports the first processing mode.
  • the sink device may determine that the secondary audio data received from the source device is not synchronized according to the target content included in the response information.
  • the HDMI interface 208 can transmit video data to the video data processor 209.
  • This embodiment does not limit how the video data processor 209 specifically processes the video data. In different application scenarios, the video data processor 209 can perform different processing on the video data.
  • the video data processor 209 can perform beautification processing or the like on the video data.
  • the video data processor 209 is capable of transmitting the processed video data to the synchronization processor 211 of the sink device.
  • the HDMI interface 208 can transmit the main audio data to the audio processor 210.
  • This embodiment does not limit how the audio processor 210 specifically processes the main audio data. In different application scenarios, the audio processor 210 can perform different processing on the main audio data.
  • the audio processor 210 can perform volume adjustment of the main audio data, and the like.
  • the audio processor 210 is capable of transmitting the processed main audio data to the synchronization processor 211 of the sink device.
  • if the sink device determines that both the sink device and the source device support not performing synchronization processing on the auxiliary audio data, the synchronization processor 211 of the sink device only needs to perform synchronization processing on the main audio data and the video data.
  • the synchronization processor 211 is configured to perform synchronization processing on the video data and the main audio data so that the processed video data is synchronized with the main audio data.
  • the synchronization processor 211 transmits the main audio data that has been synchronously processed to the mixer 212.
  • Step 317: The mixer of the sink device receives the main audio data and the auxiliary audio data.
  • the HDMI interface of the sink device sends the auxiliary audio data to the mixer.
  • when the HDMI interface 208 of the sink device receives the auxiliary audio data, the HDMI interface 208 does not process the auxiliary audio data; that is, the HDMI interface 208 sends the auxiliary audio data directly to the mixer 212.
  • the synchronization processor 211 of the sink device transmits the synchronized main audio data to the mixer.
  • the sink device does not need to synchronize the video data with the auxiliary audio data, thereby effectively reducing the direct sound delay of the auxiliary audio data in the sink device.
  • Step 318: The mixer of the sink device sends the main audio data and the auxiliary audio data to the speaker.
  • the mixer 212 can perform mixing processing on the received main audio data and auxiliary audio data, and transmit the mixed main audio data and auxiliary audio data to the speaker 214.
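The patent does not specify the mixing algorithm of mixer 212; as an illustrative sketch only, mixing can be pictured as a gain-weighted sum of the two sample streams with clipping (gains and sample format are assumptions):

```python
def mix(main_samples, aux_samples, main_gain=0.7, aux_gain=0.7):
    """Mix equal-length lists of normalized PCM samples in [-1.0, 1.0]."""
    mixed = []
    for m, a in zip(main_samples, aux_samples):
        s = main_gain * m + aux_gain * a
        mixed.append(max(-1.0, min(1.0, s)))  # clip to the valid range
    return mixed
```

Because the auxiliary path skips the synchronization processor, its samples reach this mixing stage with minimal added latency.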
  • Step 319: The sink device displays the video data.
  • the synchronization processor 211 transmits the processed video data to the display 213, thereby enabling the display 213 to display the video data.
  • the display 213 displays the MV video data.
  • the mixer 212 transmits the mixed audio data to the speaker 214.
  • the speaker 214 can play the main audio data that accompanies the MV video data and is synchronized with the MV video data, and the speaker 214 can also play the auxiliary audio data sent by the external device 201.
  • the direct sound delay of the auxiliary audio data shown in this embodiment is the sum of the delay T0 of processing the auxiliary audio data on the source device side, the delay T1 of sending the auxiliary audio data from the HDMI interface 205 of the source device to the HDMI interface 208 of the sink device, and the delay T2 of mixing the auxiliary audio data on the sink device side.
  • for example, if T0 is 15 ms, T1 is 5 ms, and T2 is 10 ms, the direct sound delay of the auxiliary audio data is 30 ms.
  • T0, T1, and T2 in this embodiment are possible examples.
  • the values of T0, T1 and T2 may also be different depending on the equipment and the environment in which they are used.
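The delay accounting above reduces to a simple sum; a sketch using the example values (which, as noted, vary by device and environment):

```python
def direct_sound_delay_ms(t0, t1, t2):
    """Direct sound delay = source-side processing (T0) + HDMI
    transmission (T1) + sink-side mixing (T2), in milliseconds."""
    return t0 + t1 + t2
```

With the example values, `direct_sound_delay_ms(15, 5, 10)` gives 30 ms, at the edge of the range the background section describes as very good.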
  • in this embodiment, the source device and the sink device do not synchronize the auxiliary audio data with the video data, thereby reducing the direct sound delay of the auxiliary audio data, so that the auxiliary audio data played by the sink device does not let the user hear an obvious echo, effectively guaranteeing the sound quality of the audio played by the sink device.
  • FIG. 3 illustrates an audio and video processing method according to an embodiment of the present invention.
  • a source device according to an embodiment of the present invention is described below with reference to the embodiment shown in FIG. 4; the source device provided in FIG. 4 can support the audio and video processing method shown in FIG. 3.
  • the source device includes a high definition multimedia interface HDMI 401, a processor 402, and a memory 403.
  • the configuration or performance of the source device may vary considerably, and the source device may include one or more processors 402.
  • Memory 403 may be transient or persistent storage.
  • the processor 402 is connected to the memory 403 and the high definition multimedia interface HDMI 401 via a bus.
  • the HDMI interface 401 is configured to receive mode information sent by the sink device, where the mode information includes a first identifier, where the first identifier is used to indicate that the source device does not perform synchronization processing on the auxiliary audio data received from the external device.
  • the processor 402 is configured to determine whether the source device itself supports not performing synchronization processing on the auxiliary audio data.
  • a memory 403 configured to store video data and main audio data
  • the processor 402 is further configured to: if the source device supports not performing synchronization processing on the auxiliary audio data, perform synchronization processing on the video data and the main audio data;
  • the HDMI interface 401 is further configured to send the unsynchronized auxiliary audio data and the synchronized main audio data and video data to the sink device.
  • the HDMI interface 401 is further configured to pre-store a capability set, where the capability set includes support for not performing synchronization processing on the auxiliary audio data or support for performing synchronization processing on the auxiliary audio data;
  • the processor 402 is further configured to determine, by using the capability set pre-stored in the HDMI interface, whether not performing synchronization processing on the auxiliary audio data is supported.
  • the HDMI interface 401 is further configured to: if the source device supports not performing synchronization processing on the auxiliary audio data, send the response information to the sink device, where the response information is used to indicate that the sink device does not perform synchronization processing on the auxiliary audio data received from the source device.
  • the processor 402 is further configured to perform synchronization processing on the video data, the main audio data, and the auxiliary audio data if the source device does not support not performing synchronization processing on the auxiliary audio data;
  • the HDMI interface 401 is further configured to send the synchronized auxiliary audio data, the main audio data, and the video data to the sink device.
  • the source device shown in this embodiment can implement the audio and video data processing method shown in FIG. 3, and the process of implementing the audio and video processing method is shown in FIG. 3, which is not specifically described in this embodiment.
  • the source device provided by this embodiment can reduce the direct sound delay of the auxiliary audio data, so that the auxiliary audio data played by the sink device does not let the user hear an obvious echo, effectively guaranteeing the sound quality of the audio played by the sink device.
  • FIG. 3 illustrates an audio and video processing method according to an embodiment of the present invention.
  • a sink device according to an embodiment of the present invention is described below with reference to the embodiment shown in FIG.
  • the sink device can support the audio and video processing method shown in FIG.
  • the sink device includes a high definition multimedia interface HDMI 501, a processor 502, and a player 503.
  • the configuration or performance of the sink device may vary considerably, and the sink device may include one or more processors 502.
  • the player 503 may include a display screen for playing video data and a speaker for playing main audio data and auxiliary audio data.
  • the processor 502 is respectively connected to the high definition multimedia interface HDMI 501 and the player 503 via a bus.
  • the HDMI interface 501 is configured to send mode information to the source device, where the mode information includes a first identifier, where the first identifier is used to indicate that the source device does not perform synchronization processing on the auxiliary audio data received from the external device;
  • the HDMI interface 501 is further configured to: receive the unsynchronized auxiliary audio data and the synchronized main audio data and video data sent by the source device;
  • the processor 502 is configured to perform synchronous processing on the main audio data and the video data.
  • the player 503 is configured to play the unsynchronized auxiliary audio data and the synchronized main audio data and video data.
  • the HDMI interface 501 is further configured to: pre-store a capability set, where the capability set includes support for not performing synchronization processing on the auxiliary audio data or support for performing synchronization processing on the auxiliary audio data;
  • the processor 502 is further configured to determine, by using the capability set pre-stored in the HDMI interface, whether not performing synchronization processing on the auxiliary audio data is supported;
  • the HDMI interface 501 is further configured to: if the processor determines, by using the capability set, that not performing synchronization processing on the auxiliary audio data is supported, send the mode information including the first identifier to the source device.
  • the HDMI interface 501 is further configured to: receive response information sent by the source device, where the response information is used to indicate that the sink device does not perform synchronization processing on the auxiliary audio data received from the source device.
  • the processor 502 is further configured to: configure mode information, where the mode information further includes a second identifier, where the second identifier is used to indicate that the source device performs synchronization processing on the auxiliary audio data received from the external device;
  • the HDMI interface 501 is further configured to: if the source device does not support not performing synchronization processing on the auxiliary audio data, receive the synchronized auxiliary audio data, main audio data, and video data sent by the source device through the HDMI interface;
  • the processor 502 is further configured to perform synchronization processing on the video data, the main audio data, and the auxiliary audio data;
  • the player 503 is further configured to play the synchronized auxiliary audio data, the main audio data, and the video data.
  • the sink device shown in this embodiment can implement the audio and video data processing method shown in FIG. 3, and the process for implementing the audio and video processing method is shown in FIG. 3, which is not specifically described in this embodiment.
  • the sink device provided by this embodiment can reduce the direct sound delay of the auxiliary audio data, so that the auxiliary audio data played by the sink device does not let the user hear an obvious echo, effectively guaranteeing the sound quality of the audio played by the sink device.


Abstract

Embodiments of the present invention provide an audio and video processing method and related devices. The method includes: a source device receives mode information sent by a sink device, where the mode information includes a first identifier used to indicate that the source device does not perform synchronization processing on auxiliary audio data received from an external device; the source device sends the unsynchronized auxiliary audio data and the synchronized main audio data and video data to the sink device through an HDMI interface. With the audio and video data processing method shown in this embodiment, the source device does not synchronize the video data with the auxiliary audio data; instead, it separately sends the auxiliary audio data and the main audio data synchronized with the video data to the sink device, so that the sink device does not need to synchronize the auxiliary audio data with the video data. This reduces the direct sound delay of the auxiliary audio data and effectively guarantees the sound quality of the audio played by the sink device.

Description

Audio and video processing method and related devices
This application claims priority to Chinese Patent Application No. 201610125958.2, filed with the Chinese Patent Office on March 4, 2016 and entitled "Audio and video processing method and related devices", which is incorporated herein by reference in its entirety.
Technical Field
The present invention relates to the multimedia field, and in particular, to an audio and video processing method and related devices.
Background
The High Definition Multimedia Interface (HDMI) is a digital video/audio interface technology. An HDMI interface can carry uncompressed audio data and high-resolution video data over a single cable. Because the HDMI interface supports digital audio and video transmission and delivers a high-quality audiovisual experience to users, HDMI interface technology is increasingly widely used in consumer electronics.
The following describes, with reference to FIG. 1, how an audio and video processing system processes audio and video data in a karaoke scenario. The audio and video processing system includes a source device 101 and a sink device 102. A microphone 103 connected to the source device 101 picks up the song "Qinghai-Tibet Plateau" sung by the singer and converts it into auxiliary audio data, which is sent to the source device 101. The mixer 104 of the source device receives the auxiliary audio data and the main audio data, where the main audio data is the audio data used as accompaniment for the video data. The mixer 104 mixes the main audio data and the auxiliary audio data and sends the mixed main audio data and auxiliary audio data to the synchronization processor 109. The synchronization processor 109 also receives the video data, synchronizes the video data with the mixed main audio data and auxiliary audio data, and sends the synchronized video data, main audio data, and auxiliary audio data to the HDMI interface 105. The HDMI interface 105 of the source device 101 sends the video data, main audio data, and auxiliary audio data to the HDMI interface 106 of the sink device, which forwards them to the synchronization processor 110 of the sink device 102. The synchronization processor 110 synchronizes the received video data, main audio data, and auxiliary audio data again, sends the synchronized video data to the display 108, which displays the video data, and sends the synchronized main audio data and auxiliary audio data to the speaker 107, which plays the main audio data and auxiliary audio data.
In an audio and video processing system, the direct sound delay is the difference between the moment at which a user hears audio data and the moment at which that audio data starts to propagate. The direct sound delay directly affects the sound quality of the audio produced by the system. If the direct sound delay is too large, the listener perceives an obvious echo and the sound quality is very poor. Experiments show that when the direct sound delay is less than 30 milliseconds, the sound quality is very good; between 30 and 80 milliseconds, the sound quality is acceptable; above 80 milliseconds, the sound quality deteriorates, the listener perceives an echo, and the singer cannot hear his or her own voice from the speaker in real time.
As shown in FIG. 1, moment T0 is the moment at which the singer starts to produce the sound "Qinghai-Tibet Plateau", and moment T1 is the moment at which the singer's voice, after being processed by the source device 101 and the sink device 102 and emitted from the speaker 107, reaches the listener. The direct sound delay is therefore T1 - T0.
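The experimental thresholds above can be summarized as a small classifier (a sketch; the band labels paraphrase the text):

```python
def perceived_quality(direct_sound_delay_ms):
    """Map a direct sound delay in milliseconds to the quality bands
    described in the text: <30 ms very good, 30-80 ms acceptable,
    >80 ms poor with audible echo."""
    if direct_sound_delay_ms < 30:
        return "very good"
    if direct_sound_delay_ms <= 80:
        return "acceptable"
    return "poor: audible echo"
```

A prior-art pipeline with over 100 ms of delay would fall squarely in the last band.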
It can be seen that, as shown in FIG. 1, both the synchronization processor 109 of the source device 101 and the synchronization processor 110 of the sink device 102 need to synchronize the video data, main audio data, and auxiliary audio data, which increases the direct sound delay of the auxiliary audio data. This may cause the direct sound delay of the auxiliary audio data to exceed 100 milliseconds, so that the sound played by the speaker 107 has an obvious echo and poor sound quality.
Summary
Embodiments of the present invention provide an audio and video processing method and related devices that can effectively reduce the direct sound delay of auxiliary audio data.
A first aspect of the embodiments of the present invention provides an audio and video processing method based on an audio and video playing system:
The audio and video playing system includes a source device and a sink device connected to the source device.
The source device is an audio and video output device with a High Definition Multimedia Interface (HDMI).
The sink device is an audio and video receiving device with an HDMI interface.
The source device is connected to the sink device in HDMI mode.
To enable the sink device to play audio and video data normally, both the source device and the sink device support the HDMI protocol.
The HDMI 2.0 and later protocols support the transmission of two audio streams: main audio data and auxiliary audio data.
This embodiment adds a karaoke mode to the existing HDMI 2.0 and later protocols, as shown in Table 1:
Table 1
Figure PCTCN2016105442-appb-000001
In this embodiment, the description uses the example in which the video data is MV video data, the main audio data may be the audio data used as accompaniment for the MV video data, and the auxiliary audio data may be the audio data of the user's singing.
In the HDMI 2.0 and later protocols, when the service type is the normal mode, the source device needs to keep the main audio data, auxiliary audio data, and video data synchronized, and the sink device also needs to keep the main audio data, auxiliary audio data, and video data synchronized.
In the karaoke mode added in this embodiment, the source device needs to keep the main audio data synchronized with the video data but does not need to keep the video data synchronized with the auxiliary audio data; the sink device needs to keep the main audio data synchronized with the video data but does not need to keep the auxiliary audio data synchronized with the video data.
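The division of synchronization duties in Table 1 can be expressed as a small data structure (the stream and mode labels are assumptions for illustration):

```python
# Which streams each device must keep mutually synchronized, per service type.
SYNC_REQUIREMENTS = {
    "normal": {
        "source": {"video", "main_audio", "aux_audio"},
        "sink": {"video", "main_audio", "aux_audio"},
    },
    "karaoke": {
        "source": {"video", "main_audio"},  # aux audio passes through unsynced
        "sink": {"video", "main_audio"},
    },
}

def must_sync_aux(service_type, device):
    """True iff the device must synchronize the auxiliary audio stream."""
    return "aux_audio" in SYNC_REQUIREMENTS[service_type][device]
```

In the karaoke mode, neither device synchronizes the auxiliary audio stream, which is exactly what removes it from the latency-adding synchronization path.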
The audio and video processing method shown in this embodiment includes:
The source device connects to the sink device in HDMI mode;
Specifically, the HDMI interface of the source device is connected to the HDMI interface of the sink device through an HDMI cable.
The source device determines the service type supported by the sink device, that is, whether the sink device supports the normal mode or the karaoke mode;
Specifically, the sink device sends the source device mode information that can indicate the service type supported by the sink device;
More specifically, the mode information shown in this embodiment can indicate that the sink device supports the normal mode, or the mode information can indicate that the sink device supports the karaoke mode.
For how the source device and the sink device process audio and video data when the service type is the normal mode or the karaoke mode, see Table 1.
Optionally, the source device may send the sink device a message requesting the sink device to send the mode information; when the sink device receives the message requesting the mode information, the sink device sends the mode information to the source device;
If the source device receives the mode information, the source device can determine the service type supported by the sink device.
Optionally, when the source device and the sink device successfully establish an HDMI connection, the sink device may automatically send the mode information to the source device.
Based on the mode information, the source device can determine the service type that the sink device supports, that is, whether the sink device supports the normal mode or the karaoke mode.
This embodiment is described using an example in which the sink device notifies the source device, through the mode information, that the sink device supports the karaoke mode. Specifically, the sink device indicates a first processing mode through a first identifier included in the mode information, where the first processing mode is the karaoke mode shown in Table 1.
The audio and video processing method further includes:
The source device receives auxiliary audio data through an external device;
where the external device is a device connected to the source device;
In a specific application scenario, the external device may be a microphone device.
When the source device determines, according to the mode information, that the sink device supports the karaoke mode, the source device only needs to keep the video data and the main audio data synchronized and does not need to perform synchronization processing on the auxiliary audio data.
The source device sends the unsynchronized auxiliary audio data and the synchronized main audio data and video data to the sink device through the HDMI interface.
The sink device synchronizes the received main audio data with the video data, and the sink device does not need to synchronize the auxiliary audio data with the video data;
The sink device can display the video data, and the sink device can also play the auxiliary audio data and the main audio data synchronized with the video data.
It can be seen that, with the audio and video data processing method shown in this embodiment, because the source device and the sink device support the karaoke mode, neither the source device nor the sink device synchronizes the auxiliary audio data with the video data. This reduces the direct sound delay of the auxiliary audio data, so that the auxiliary audio data played by the sink device does not let the user hear an obvious echo, effectively guaranteeing the sound quality of the audio played by the sink device.
With reference to the first aspect of the embodiments of the present invention, in a first implementation of the first aspect of the embodiments of the present invention:
The mode information is an auxiliary audio type list;
The following first describes, with reference to Table 2, the content of the auxiliary audio type list when the service type is the normal mode:
Table 2
Figure PCTCN2016105442-appb-000002
The auxiliary audio type list shown in Table 2 includes the correspondence between identifiers and audio and video data processing modes.
The identifiers shown in Table 2 are 3-bit binary data.
Specifically, the audio and video data processing modes corresponding to the second identifiers "000", "001", "010", "011", and "100" in the auxiliary audio type list shown in Table 2 are those specified by the existing HDMI 2.0 and later protocols: the source device sends two audio streams to the sink device, one carrying the main audio data and the other the auxiliary audio data, and the source device and the sink device need to keep the video data, main audio data, and auxiliary audio data synchronized.
The following describes, with reference to Table 3, the content of the auxiliary audio type list when the service types are the normal mode and the karaoke mode:
Table 3
Figure PCTCN2016105442-appb-000003
The audio and video data processing modes corresponding to the second identifiers "000", "001", "010", "011", and "100" in the auxiliary audio type list shown in this embodiment remain the same as those shown in Table 2.
Specifically, this embodiment adds a first identifier "101" in the identifier reserved field of the auxiliary audio type list shown in Table 2, and adds, in the reserved extension field of the audio and video data processing modes, a first processing mode corresponding to the first identifier, where the first processing mode is to not perform synchronization processing on the auxiliary audio data.
The sink device notifies the source device, through the first processing mode, not to synchronize the auxiliary audio data; that is, the sink device indicates through the first processing mode that it supports the karaoke mode.
The source device receives, through the HDMI interface, the auxiliary audio type list sent by the sink device.
In this embodiment, the source device can determine from the auxiliary audio type list that the sink device supports the karaoke mode, so the source device does not need to keep the auxiliary audio data synchronized with the video data. This effectively reduces the direct sound delay introduced on the source device side by keeping the auxiliary audio data synchronized with the video data, that is, it effectively reduces the direct sound delay of the auxiliary audio data on the source device side.
After receiving the auxiliary audio type list, the source device needs to determine whether the source device and the sink device support the first processing mode;
Specifically, if the source device determines that the auxiliary audio type list includes the first identifier, the source device can determine that the sink device supports the first processing mode;
Specifically, the source device pre-stores a capability set, which stores the processing modes in which the source device can process video data, auxiliary audio data, and main audio data;
More specifically, if the source device determines that the first identifier is stored in the capability set, the source device can determine that it supports the first processing mode; if the source device determines that the first identifier is not stored in the capability set, the source device can determine that it does not support the first processing mode.
结合本发明实施例第一方面或本发明实施例第一方面的第一种实现方式,本发明实施例第一方面的第二种实现方式中,
若该源设备确定该源设备支持该第一处理模式,则该源设备通过该HDMI接口将响应信息发送给该宿设备,宿设备通过该响应信息即可确定宿设备对从源设备接收到的辅音频数据不进行同步处理。
可选的,响应信息可为与第一处理模式对应的第一标识;
可选的,响应信息可为包含有目标内容的信息,该目标内容为指示宿设备对从源设备接收到的辅音频数据不进行同步处理。
本实施例中，源设备通过响应信息指示源设备支持卡拉OK模式，则宿设备即可执行卡拉OK模式，即宿设备能够无需保持辅音频数据与视频数据的同步，进而有效的减少了辅音频数据在宿设备侧的直达声时延。
结合本发明实施例第一方面至本发明实施例第一方面的第二种实现方式任一项所述的方法,本发明实施例第一方面的第三种实现方式中,
若源设备不支持对辅音频数据不进行同步处理,则源设备对视频数据、主音频数据和辅音频数据进行同步处理;
源设备通过HDMI接口将经过同步处理的辅音频数据、主音频数据以及视频数据发送给宿设备。
本发明实施例第二方面提供了一种音视频数据处理方法,该方法包括:
宿设备将已配置的模式信息通过高清晰度多媒体接口HDMI发送给源设备；
该模式信息具体请详见表1所示。
该宿设备接收该源设备通过该HDMI接口发送的未经过同步处理的辅音频数据、经过同步处理的主音频数据以及视频数据;
本实施例中的宿设备能够支持服务类型为卡拉OK模式的情况,则该宿设备对该视频数据和该主音频数据进行同步处理,而且宿设备无需保持辅音频数据与视频数据的同步。
该宿设备显示该视频数据;
具体的,宿设备通过宿设备的显示器显示视频数据。
该宿设备播放该辅音频数据和与该视频数据同步的主音频数据。
具体的,宿设备通过宿设备的扬声器播放辅音频数据和与该视频数据同步的主音频数据。
本实施例中,因宿设备支持卡拉OK模式,则源设备以及宿设备不会对辅音频数据与视频数据进行同步处理,从而降低了辅音频数据的直达声时延,从而使得宿设备所播放的辅音频数据不会让用户听见明显的回声,有效的保障了宿设备所播放的音频的音质。
结合本发明实施例的第二方面,本发明实施例第二方面的第一种实现方式中,
宿设备通过HDMI接口中预先存储的能力集合确定是否支持对辅音频数据不进行同步处理,能力集合包括支持对辅音频数据不进行同步处理或支持对辅音频数据进行同步处理;
具体的,若宿设备的能力集合中预先存储有支持对辅音频数据不进行同步处理,则宿设备即可确定宿设备能够支持对辅音频数据不进行同步处理。
若宿设备的能力集合中没有存储支持对辅音频数据不进行同步处理的处理模式，则宿设备即可确定宿设备不能够支持对辅音频数据不进行同步处理。
若宿设备通过HDMI接口中预先存储的能力集合确定支持对辅音频数据不进行同步处理,则宿设备将包括有第一标识的模式信息发送给源设备。
宿设备通过该第一标识通知源设备,宿设备支持卡拉OK模式。
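宿设备根据能力集合决定模式信息是否携带第一标识的逻辑，可粗略示意如下（能力取值与函数名为假设）：

```python
SECOND_IDS = ["000", "001", "010", "011", "100"]  # 现有协议规定的第二标识

def build_mode_info(capability_set):
    """宿设备：若能力集合支持不对辅音频数据进行同步处理，
    则在发送给源设备的模式信息中增加第一标识“101”。"""
    mode_info = list(SECOND_IDS)
    if "no_sync_aux_audio" in capability_set:
        mode_info.append("101")  # 第一标识：指示支持卡拉OK模式
    return mode_info
```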
结合本发明实施例的第二方面或本发明实施例第二方面的第一种实现方式,本发明实施例第二方面的第二种实现方式中,
该宿设备通过该HDMI接口接收该源设备发送的该响应信息;
该宿设备根据该响应信息确定源设备支持卡拉OK模式。
本实施例中，源设备通过响应信息指示源设备支持卡拉OK模式，则宿设备即可执行卡拉OK模式，即宿设备能够无需保持辅音频数据与视频数据的同步，进而有效的减少了辅音频数据在宿设备侧的直达声时延。
结合本发明实施例的第二方面至本发明实施例第二方面的第二种实现方式中任一项所述的方法,本发明实施例第二方面的第三种实现方式中,
该宿设备配置该模式信息,该模式信息为辅助音频类型列表,该辅助音频类型列表具体请详见表3所示。
本实施例宿设备通过辅助音频类型列表即可通知源设备，宿设备支持卡拉OK模式，则源设备即可无需保持辅音频数据与视频数据的同步，从而有效的减少了在源设备侧保持辅音频数据与视频数据同步时所带来的直达声时延，即有效的减少了辅音频数据在源设备侧的直达声时延。
本发明实施例第三方面提供了一种源设备,包括高清晰度多媒体接口HDMI、处理器以及存储器;
HDMI接口,用于接收宿设备发送的模式信息,模式信息包括第一标识,第一标识用于指示源设备对从外接设备接收的辅音频数据不进行同步处理;
处理器,用于判断源设备自身是否支持对辅音频数据不进行同步处理;
存储器,用于存储视频数据和主音频数据;
处理器还用于,若源设备支持对辅音频数据不进行同步处理,则处理器对视频数据和主音频数据进行同步处理;
HDMI接口还用于,将未经过同步处理的辅音频数据和经过同步处理的主音频数据和视频数据发送给宿设备。
结合本发明实施例第三方面,本发明实施例第三方面的第一种实现方式中,
HDMI接口还用于,预先存储有能力集合,能力集合包括支持对辅音频数据不进行同步处理或支持对辅音频数据进行同步处理;
处理器还用于,通过HDMI接口中预先存储的能力集合确定是否支持对辅音频数据不进行同步处理。
结合本发明实施例第三方面或本发明实施例第三方面的第一种实现方式,本发明实施例第三方面的第二种实现方式中,HDMI接口还用于,若源设备支持对辅音频数据不进行同步处理,则向宿设备发送响应信息,响应信息用于指示宿设备对从源设备接收到的辅音频数据不进行同步处理。
结合本发明实施例第三方面至本发明实施例第三方面的第二种实现方式任一项的源设备,本发明实施例第三方面的第三种实现方式中,
模式信息还包括第二标识,第二标识用于指示源设备对从外接设备接收的辅音频数据进行同步处理;
处理器还用于,若源设备不支持对辅音频数据不进行同步处理,则对视频数据、主音频数据和辅音频数据进行同步处理;
HDMI接口还用于,将经过同步处理的辅音频数据、主音频数据以及视频数据发送给宿设备。
本发明实施例第四方面提供了一种宿设备,包括高清晰度多媒体接口HDMI、处理器以及播放器;
HDMI接口,用于向源设备发送模式信息,模式信息包括第一标识,第一标识用于指示源设备对从外接设备接收的辅音频数据不进行同步处理;
HDMI接口还用于,接收源设备发送的未经过同步处理的辅音频数据和经过同步处理的主音频数据和视频数据;
处理器,用于对主音频数据和视频数据进行同步处理;
播放器,用于播放未经同步处理的辅音频数据和经过同步处理的主音频数据和视频数据。
结合本发明实施例第四方面,本发明实施例第四方面的第一种实现方式中,HDMI接口还用于,预先存储有能力集合,能力集合包括支持对辅音频数据不进行同步处理或支持对辅音频数据进行同步处理;
处理器还用于,通过HDMI接口中预先存储的能力集合确定是否支持对辅音频数据不进行同步处理;
HDMI接口还用于,若处理器通过能力集合确定支持对辅音频数据不进行同步处理,则将包括有第一标识的模式信息发送给源设备。
结合本发明实施例第四方面或本发明实施例第四方面的第一种实现方式,本发明实施例第四方面的第二种实现方式中,HDMI接口还用于,接收源设备发送的响应信息,响应信息用于指示宿设备对从源设备接收到的辅音频数据不进行同步处理。
结合本发明实施例第四方面至本发明实施例第四方面的第二种实现方式任一项所述的宿设备,本发明实施例第四方面的第三种实现方式中,
处理器还用于,配置模式信息,模式信息还包括第二标识,第二标识用于指示源设备对从外接设备接收的辅音频数据进行同步处理;
HDMI接口还用于,若源设备不支持对辅音频数据不进行同步处理,则接收源设备通过HDMI接口发送的经过同步处理的辅音频数据、主音频数据以及视频数据;
处理器还用于,对视频数据、主音频数据和辅音频数据进行同步处理;
播放器还用于,播放经过同步处理的辅音频数据、主音频数据和视频数据。
本发明实施例提供了一种音视频数据处理方法、相关设备以及播放系统,该音视频数据处理方法包括源设备接收宿设备发送的模式信息,该模式信息包括第一标识,第一标识用于指示源设备对从外接设备接收的辅音频数据不进行同步处理,该源设备通过外接设备接收辅音频数据,该源设备对视频数据和主音频数据进行同步处理,该源设备通过该HDMI接口将未经同步处理的辅音频数据、经过同步处理的该主音频数据以及该视频数据发送给该宿设备。采用本实施例所示的音视频数据处理方法,该源设备不会将视频数据与辅音频数据进行同步处理,而是分别将该辅音频数据和与该视频数据同步的该主音频数据发送给该宿设备,使得该宿设备无需将该辅音频数据与该视频数据进行同步处理,从而降低了该辅音频数据的直达声时延,从而使得该宿设备所播放的该辅音频数据不会让用户听见明显的回声,有效的保障了该宿设备所播放的音频的音质。
附图说明
图1为现有技术所示的音视频数据通过音视频数据处理系统进行传播的示意图;
图2为本发明所提供的音视频数据播放系统的一种实施例结构示意图;
图3为本发明所提供的音视频数据处理方法的一种实施例步骤流程图;
图4为本发明所提供的源设备的一种实施例结构示意图;
图5为本发明所提供的宿设备的一种实施例结构示意图。
具体实施方式
以下结合图2所示对本发明实施例所提供的音视频数据播放系统的具体结构进行说明:
本实施例所示的音视频数据播放系统包括源设备以及与源设备连接的宿设备。
源设备是指带高清晰度多媒体接口(英文全称:High Definition Multimedia Interface,英文简称:HDMI)的音视频数据输出设备。
例如，源设备可为DVD播放器、机顶盒STB、蓝光DVD播放器。
宿设备为带HDMI接口的音视频数据接收设备。
例如,宿设备可为支持HDMI接口的电视机。
源设备通过HDMI线缆与宿设备连接。
具体的,本实施例所示的源设备的HDMI接口与宿设备的HDMI接口通过HDMI线缆连接。
采用HDMI接口进行源设备和宿设备连接的优势在于,源设备可同时将音频数据和视频数据传输给宿设备。
其中,HDMI接口是一种数字化视频数据/音频接口技术,是适合影像传输的专用型数字化接口,其可同时传送音频和视频数据,最高数据传输速度为5Gbps。同时无需在数据传送前进行数/模或者模/数转换。
HDMI接口可搭配宽带数字内容保护(英文全称:High-bandwidth Digital Content Protection,英文简称:HDCP),以防止具有著作权的影音内容遭到未经授权的复制。
HDMI接口的音频数据和视频数据采用最小化传输差分信号（英文全称：Transition-Minimized Differential Signaling，英文简称：TMDS）技术进行传输。
TMDS在HDMI接口中的具体传输过程请详见现有技术所示，具体在本实施例中不做赘述。
以下结合图2和图3所示对本实施例所提供的音视频数据处理方法的具体过程进行详细说明:
步骤301、源设备通过外接设备接收辅音频数据。
如图2所示,本实施例所示的音视频数据播放系统还包括外接设备201。
外接设备201为与源设备连接的且用于接收音频信号的设备。
在具体应用场景中,外接设备201可为麦克风设备,电子游乐器等。
电子游乐器可为吉他、钢琴等乐器类游乐器，或射击类游乐器等。
本实施例对外接设备201不做限定,只要外接设备201能够接收到用户输入的音频信号即可。
例如，若外接设备201为麦克风设备，则用户通过麦克风设备输入音频信号，麦克风设备能够对已输入的音频信号进行处理以形成辅音频数据。
其中，用户通过麦克风输入的音频信号可为说话的声音或者唱歌的声音。
还例如,若外接设备201为电子游乐器,则用户通过电子游乐器输入的音频信号可为操作电子游乐器的声音。
为更好的理解本发明实施例，本发明实施例以音视频数据处理方法应用至卡拉OK场景下为例，即以外接设备为麦克风设备，用户通过麦克风设备输入唱歌的音频信号，麦克风设备将音频信号转换为辅音频数据，并将已转换的辅音频数据发送给源设备为例。
其中,麦克风设备具体如何将音频信号转换为辅音频数据的请详见现有技术所示,具体在本实施例中不做赘述。
需明确的是,本实施例所示的应用场景仅仅为可选的示例,不做限定。
步骤302、源设备对辅音频数据进行处理。
具体的,源设备的辅音频数据处理器207接收外接设备201发送的辅音频数据。
更具体的,辅音频数据处理器207对辅音频数据进行处理。
本实施例中,在不同的应用场景下辅音频数据处理器207能够对辅音频数据进行不同的处理,具体的处理方式在本实施例中不做限定。
例如,若外接设备为麦克风设备,则辅音频数据处理器207能够对辅音频数据进行卡拉OK音效处理。
更具体的,辅音频数据处理器207将处理后的辅音频数据发送给源设备的HDMI接口205。
步骤303、宿设备配置模式信息。
具体的，宿设备的HDMI接口208配置模式信息，该模式信息用于指示源设备是否对从外接设备接收的辅音频数据进行同步处理。
更具体的,模式信息为辅助音频类型列表。
其中,宿设备通过辅助音频类型列表通知源设备音视频数据的应用场景,以使源设备根据辅助音频类型列表确定如何对音视频数据进行处理。
以下对辅助音频类型列表进行说明:
本实施例所示的辅助音频类型列表为在现有技术所示的辅助音频类型列表的基础上进行的改进,以使通过本实施例改进后的辅助音频类型列表能够使得源设备确定宿设备是否支持不对辅音频数据进行同步处理的应用场景。
以下首先结合表2对现有技术所示的辅助音频类型列表进行说明:
表2
标识（3比特）　　　　音视频数据处理方式
“000”～“100”　　　现有HDMI 2.0及以上协议所规定的处理方式（对辅音频数据进行同步处理）
“101”～“111”　　　预留
其中,现有技术所示的辅助音频类型列表包括了标识和音视频数据处理方式的对应关系。
现有技术所示的标识为3比特的二进制数据。
具体的,辅助音频类型列表包括第二标识和第二处理模式的对应关系。
其中,第二处理模式为对辅音频数据进行同步处理。
更具体的,现有技术所示的辅助音频类型列表中的第二标识“000”、“001”、“010”、“011”与“100”所对应的第二处理模式为现有HDMI 2.X协议所规定的,具体请详见现有HDMI 2.X协议所示,在本实施例中不做赘述。
具体的,本实施例所示的HDMI 2.X协议为HDMI 2.0或HDMI 2.0以上的协议所规定。
更具体的,X为大于或等于0的自然数。
可选的,将现有技术所提供的辅助音频类型列表中所包含的标识确定为第二标识,将现有技术所示的与第二标识对应的音视频数据处理方式确定为第二处理模式,即现有技术所示的辅助音频类型列表包括了第二标识和第二处理模式的对应关系。
对现有技术所提供的第二处理模式在本实施例不做赘述,可选的,第二处理模式为对辅音频数据进行同步处理。
本实施例中，宿设备在配置辅助音频类型列表时，在现有的HDMI 2.X协议所规定的配置辅助音频类型列表的标识预留字段中增加新的标识，在音视频数据处理方式保留扩展字段中增加与新的标识对应的音视频数据处理方式，以下结合表3所示对本实施例所提供的辅助音频类型列表进行说明：
表3
标识（3比特）　　　　音视频数据处理方式
“000”～“100”　　　现有HDMI 2.0及以上协议所规定的处理方式（对辅音频数据进行同步处理）
“101”　　　　　　　第一处理模式：不对辅音频数据进行同步处理（卡拉OK模式）
“110”～“111”　　　预留
本实施例所示的辅助音频类型列表中的第二标识“000”、“001”、“010”、“011”与“100”所对应的音视频数据处理方式与现有HDMI 2.X协议所规定的保持不变,即本实施例所示的辅助音频类型列表对现有的HDMI 2.X协议已规定的音视频数据处理方式不作更改。
具体的，本实施例在HDMI 2.X协议中的辅助音频类型列表标识预留字段中增加第一标识“101”以及在音视频数据处理方式的保留扩展字段中增加与第一标识对应的第一处理模式，第一处理模式为对辅音频数据不进行同步处理。
需明确的是,本实施例对第一标识不做限定,只要本实施例新增的第一标识与现有的HDMI 2.X协议已规定的第二标识不一致即可。
宿设备通过第一标识通知源设备不对辅音频数据进行同步处理。
需明确的是,本实施例中的步骤303与步骤301至步骤302之间并无执行时序上的先后关系。
步骤304、宿设备将辅助音频类型列表发送给源设备。
宿设备的HDMI接口208通过HDMI线缆将辅助音频类型列表发送给源设备的HDMI接口205。
具体的,宿设备需要通过辅助音频类型列表通知源设备,宿设备是否支持对辅音频数据不进行同步处理。
更具体的,本实施例所示的宿设备需要确定宿设备是否支持对辅音频数据不进行同步处理。
可选的,宿设备的HDMI接口208可预先存储有能力集合。
宿设备的能力集合中存储有宿设备能够对视频数据、辅音频数据以及主音频数据如何进行处理的处理模式。
若宿设备确定宿设备的能力集合中包括对辅音频数据不进行同步处理的处理模式,则宿设备可将包括有第一标识和第一处理模式的辅助音频类型列表发送给源设备。
若宿设备确定宿设备的能力集合中包括对辅音频数据进行同步处理的处理模式,则宿设备可将包括有第二标识和第二处理模式的辅助音频类型列表发送给源设备。
步骤305、源设备接收辅助音频类型列表。
具体的,源设备的HDMI接口205接收宿设备的HDMI接口208通过HDMI线缆所发送的辅助音频类型列表。
步骤306、源设备判断辅助音频类型列表中是否包括第一标识,若是,则执行步骤307,若否,则执行步骤308。
具体的,在源设备接收到辅助音频类型列表后,源设备判断宿设备是否支持第一处理模式。
具体的,若源设备确定辅助音频类型列表中包括第一标识,则源设备确定宿设备支持第一处理模式,若源设备确定辅助音频类型列表中不包括第一标识,则源设备确定宿设备支持第二处理模式。
第一处理模式和第二处理模式的具体说明请详见上述所示,具体在本处不再赘述。
步骤307、源设备判断自身是否支持对辅音频数据不进行同步处理，若否，则执行步骤308，若是，则执行步骤312。
具体的,源设备的HDMI接口205预先存储有能力集合。
源设备的能力集合中存储有源设备能够对视频数据、辅音频数据以及主音频数据如何进行处理的处理模式。
可选的,该能力集合中可存储有第一处理模式和/或第二处理模式,其中,第一处理模式以及第二处理模式的具体说明请详见上述所示,具体在本处不再赘述。
还需明确的是,源设备的HDMI接口如何建立能力集合为现有技术,具体在本实施例中不做赘述。
更具体的,若源设备确定能力集合中包括第一处理模式,则源设备即可确定源设备支持第一处理模式,若源设备确定能力集合中不包括有第一处理模式,则源设备即可确定源设备不支持第一处理模式。
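步骤306与步骤307的判断逻辑可合并示意如下：仅当宿设备与源设备均支持第一处理模式时，源设备才按第一处理模式处理（函数名与取值为假设）：

```python
def choose_processing_mode(aux_type_list_ids, source_capabilities):
    """返回源设备应采用的处理模式：
    first —— 不对辅音频数据进行同步处理（对应步骤312）；
    second —— 对视频、主音频和辅音频数据均进行同步处理（对应步骤308）。"""
    sink_ok = "101" in aux_type_list_ids                    # 步骤306：列表是否包括第一标识
    source_ok = "no_sync_aux_audio" in source_capabilities  # 步骤307：自身能力集合
    return "first" if (sink_ok and source_ok) else "second"
```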
具体的,在源设备执行步骤306以确定出辅助音频类型列表中不包括第一标识,和/或,在源设备执行步骤307以确定出源设备自身不支持对辅音频数据不进行同步处理,则执行步骤308。
步骤308、源设备按第二处理模式对音视频数据进行处理。
本实施例中,第二处理模式为现有的HDMI 2.X协议已规定的音视频数据处理方式,具体在本实施例中不做赘述。
具体的,在源设备确定宿设备和/或源设备自身不支持第一处理模式的情况下,源设备即可按第二处理模式对音视频数据进行处理。
例如,源设备确定宿设备所发送的辅助音频类型列表中只包括第二标识,则源设备即可按第二处理模式对音视频数据进行处理。
还例如,源设备确定宿设备所发送的辅助音频类型列表中包括第一标识和第二标识,但源设备的能力集合中只包括有第二处理模式,则源设备即可按第二处理模式对音视频数据进行处理。
步骤309、源设备通过HDMI接口将经过同步处理的辅音频数据、主音频数据和视频数据发送给宿设备。
具体的，源设备确定源设备自身不支持对从外接设备接收的辅音频数据不进行同步处理，和/或源设备确定宿设备不支持对辅音频数据不进行同步处理的情况下，源设备即可将按第二处理模式处理后（即经过同步处理）的辅音频数据、主音频数据和视频数据发送给宿设备。
步骤310、宿设备接收源设备发送的经过同步处理的辅音频数据、主音频数据和视频数据。
本实施例中宿设备的HDMI接口208能够接收HDMI线缆发送过来的HDMI信号,且宿设备的HDMI接口208能够获取HDMI信号所包含的辅音频数据、主音频数据以及视频数据。
步骤311、宿设备按第二处理模式对辅音频数据、主音频数据和视频数据进行处理。
具体的,宿设备按现有技术所示对辅音频数据、主音频数据和视频数据进行同步处理,具体在本实施例中不做赘述。
步骤312、源设备按第一处理模式对音视频数据进行处理。
具体的,源设备可对视频数据和主音频数据进行同步处理。
具体的,如图2所示,源设备的存储器203中存储有视频数据以及主音频数据。
其中,在本实施例卡拉OK场景下,视频数据为MV视频数据,主音频数据可为用于为MV视频数据进行伴奏的音频数据。
源设备的视频数据解码器202从存储器203中获取视频数据,并对已获取到的视频数据进行解码。
视频数据解码器202将已解码的视频数据发送给同步处理器204。
源设备的主音频数据解码器从存储器203中获取主音频数据，并对已获取到的主音频数据进行解码。
主音频数据解码器将已解码的主音频数据发送给同步处理器204。
同步处理器204对视频数据与主音频数据进行同步处理,以使视频数据与主音频数据同步。
本实施例对同步处理器204具体如何对视频数据与主音频数据进行同步处理不做限定，只要同步处理后的视频数据与主音频数据同步即可。
例如,同步处理器204对主音频数据进行时延缓冲处理,以使视频数据与主音频数据同步。
同步处理器204将经过同步处理的视频数据和主音频数据发送给源设备的HDMI接口205。
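同步处理器对主音频数据做时延缓冲以对齐视频的做法，可用如下示意代码表示（按采样点整体延后，函数名与参数均为假设）：

```python
from collections import deque

def delay_buffer(samples, delay_count, fill=0):
    """将主音频数据整体延后 delay_count 个采样点，
    以补偿视频处理时延，使主音频与视频对齐。"""
    buf = deque([fill] * delay_count)  # 先填充 delay_count 个静音采样
    out = []
    for s in samples:
        buf.append(s)
        out.append(buf.popleft())  # 先进先出，实现固定时延
    return out
```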
步骤313、源设备通过HDMI接口将未经过同步处理的辅音频数据和经过同步处理的主音频数据和视频数据发送给宿设备。
在源设备确定宿设备以及源设备能够支持第一处理模式的情况下,源设备即可不将辅音频数据与视频数据进行同步处理,并将未经过同步处理的辅音频数据和经过同步处理后的主音频数据和视频数据发送给宿设备。
具体的,本实施例所示的源设备的HDMI接口205能够将辅音频数据、主音频数据以及视频数据以HDMI信号的形式通过HDMI线缆发送给宿设备的HDMI接口208。
因本实施例所示的源设备不将辅音频数据与视频数据进行同步处理,从而减少了辅音频数据在源设备侧的直达声时延。
步骤314、源设备向宿设备发送响应信息。
响应信息用于指示宿设备对从源设备接收到的辅音频数据不进行同步处理。
本实施例中,源设备可通过两种方式将响应信息发送给宿设备。
可选的,一种为:在源设备确定宿设备和源设备均支持第一处理模式的情况下,源设备即可查询表3所示的辅助音频类型列表,进而源设备即可确定与第一处理模式对应的第一标识。
本种情况下,该第一标识即为响应信息,源设备的HDMI接口205通过HDMI线缆将第一标识发送给宿设备的HDMI接口208。
可选的，另一种为：在源设备确定宿设备和源设备均支持第一处理模式的情况下，则源设备即可生成包括有目标内容的响应信息，该目标内容为指示宿设备对从源设备接收的辅音频数据不进行同步处理，则使得宿设备在接收到包括目标内容的响应信息时，宿设备即可确定对从源设备接收到的辅音频数据不进行同步处理。
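上述两种响应信息的构造方式可示意如下（字段名与取值均为假设，并非协议规定）：

```python
def make_response(use_first_id=True):
    """构造响应信息：方式一直接携带第一标识“101”；方式二携带目标内容。"""
    if use_first_id:
        return {"kind": "first_id", "value": "101"}
    return {"kind": "target_content",
            "value": "do_not_sync_aux_audio_from_source"}

def sink_should_skip_aux_sync(response):
    """宿设备根据响应信息判断是否对来自源设备的辅音频数据不进行同步处理。"""
    return response.get("value") in ("101", "do_not_sync_aux_audio_from_source")
```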
步骤315、宿设备接收源设备发送的响应信息。
响应信息的具体说明请详见步骤314所示，具体在本步骤中不做赘述。
步骤316、宿设备根据响应信息确定源设备支持第一处理模式。
具体的,若响应信息为第一标识,则宿设备即可根据辅助音频类型列表即可确定与第一标识对应的第一处理模式,宿设备即可确定源设备支持第一处理模式。
具体的,若响应信息为包含有目标内容的信息,则宿设备即可根据响应信息所包含的目标内容确定对从源设备接收到的辅音频数据不进行同步处理。
具体的,HDMI接口208能够将视频数据发送给视频数据处理器209。
本实施例对视频数据处理器209具体如何对视频数据进行处理不做限定,在不同的应用场景下,视频数据处理器209能够对视频数据进行不同的处理。
例如,视频数据处理器209能够对视频数据进行美化处理等。
视频数据处理器209能够将已处理的视频数据发送给宿设备的同步处理器211。
具体的,HDMI接口208能够将主音频数据发送给音频处理器210。
本实施例对音频处理器210具体如何对主音频数据进行处理不做限定,在不同的应用场景下,音频处理器210能够对主音频数据进行不同的处理。
例如,音频处理器210能够对主音频数据进行音量的大小调节等。
音频处理器210能够将已处理的主音频数据发送给宿设备的同步处理器211。
在本实施例中,宿设备在确定宿设备和源设备均支持不对辅音频数据进行同步处理的情况下,宿设备的同步处理器211只需要对主音频数据和视频数据进行同步处理即可。
具体的,同步处理器211用于对视频数据和主音频数据进行同步处理,以使同步处理后的视频数据与主音频数据同步。
更具体的,同步处理器211将已同步处理完成的主音频数据发送给混音器212。
步骤317、宿设备的混音器接收主音频数据和辅音频数据。
具体的,宿设备的HDMI接口将辅音频数据发送给混音器。
具体的,本实施例中,宿设备的HDMI接口208接收到辅音频数据时,HDMI接口208不对辅音频数据进行处理,即HDMI接口208直接将辅音频数据发送给混音器212。
具体的,宿设备的同步处理器211将进行同步处理后的主音频数据发送给混音器。
本实施例中,宿设备无需将视频数据与辅音频数据进行同步处理,从而有效的减少了辅音频数据在宿设备中的直达声时延。
步骤318、宿设备的混音器将主音频数据和辅音频数据发送给扬声器。
本实施例中，混音器212能够对接收到的主音频数据和辅音频数据进行混音处理，并将混音处理后的主音频数据和辅音频数据发送给扬声器214。
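混音器对主音频数据与辅音频数据的混音处理，可用如下示意代码表示（按16位有符号PCM逐样本相加并限幅，实现细节为假设）：

```python
def mix_pcm(main_pcm, aux_pcm):
    """逐样本相加并限幅到16位有符号范围；较短的一路按静音（0）补齐。"""
    n = max(len(main_pcm), len(aux_pcm))
    main = main_pcm + [0] * (n - len(main_pcm))
    aux = aux_pcm + [0] * (n - len(aux_pcm))
    return [max(-32768, min(32767, a + b)) for a, b in zip(main, aux)]
```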
步骤319、宿设备显示视频数据。
具体的,同步处理器211将处理完成的视频数据发送给显示器213,从而使得显示器213能够显示视频数据。
在本实施例卡拉OK的场景下,显示器213显示的是MV视频数据。
具体的,混音器212将混音处理后的音频数据发送给扬声器214。
在本实施例卡拉OK的场景下，扬声器214能够播放为MV视频数据伴奏且与MV视频数据同步的主音频数据，且扬声器214还能够播放外接设备201所发送的辅音频数据。
进一步参见图2所示,本实施例所示的辅音频数据的直达声时延为辅音频数据在源设备侧进行处理的时延T0、辅音频数据由源设备的HDMI接口205发送至宿设备的HDMI接口208的时延T1以及辅音频数据在宿设备侧进行混音处理的时延T2的和。
即辅音频数据的直达声时延=T0+T1+T2。
在本实施例所示的卡拉OK场景下,T0=15ms,T1=5ms,T2=10ms,继而辅音频数据的直达声时延为30ms。
需明确的是，本实施例对T0、T1以及T2的取值仅为可能的示例，不做限定；在具体应用中，因设备的不同、使用环境的不同等情况，T0、T1以及T2的取值也可不同。
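直达声时延的计算可直接示意为三段时延之和（数值取自上文示例，仅为可能的取值）：

```python
def direct_sound_latency(t0_ms, t1_ms, t2_ms):
    """直达声时延 = 源设备侧处理时延T0 + HDMI传输时延T1 + 宿设备侧混音时延T2。"""
    return t0_ms + t1_ms + t2_ms
```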
可见,采用本实施例所示的音视频数据处理方法,因源设备以及宿设备不会对辅音频数据与视频数据进行同步处理,从而降低了辅音频数据的直达声时延,从而使得宿设备所播放的辅音频数据不会让用户听见明显的回声,有效的保障了宿设备所播放的音频的音质。
图3所示的实施例说明了本发明实施例所提供的音视频处理方法,以下结合图4所示的实施例说明本发明实施例所提供的一种源设备,其中,图4所提供的源设备能够支持图3所示的音视频处理方法。
具体的,源设备包括高清晰度多媒体接口HDMI401、处理器402以及存储器403。
具体的,该源设备可因配置或性能不同而产生比较大的差异,可以包括一个或一个以上的处理器402。
存储器403可以是短暂存储或持久存储。
更具体的,所述处理器402分别与所述存储器403以及所述高清晰度多媒体接口HDMI401通过总线连接。
其中,HDMI接口401,用于接收宿设备发送的模式信息,模式信息包括第一标识,第一标识用于指示源设备对从外接设备接收的辅音频数据不进行同步处理;
处理器402,用于判断源设备自身是否支持对辅音频数据不进行同步处理;
存储器403,用于存储视频数据和主音频数据;
处理器402还用于,若源设备支持对辅音频数据不进行同步处理,则处理器对视频数据和主音频数据进行同步处理;
HDMI接口401还用于,将未经过同步处理的辅音频数据和经过同步处理的主音频数据和视频数据发送给宿设备。
可选的,HDMI接口401还用于,预先存储有能力集合,能力集合包括支持对辅音频数据不进行同步处理或支持对辅音频数据进行同步处理;
处理器402还用于，通过所述HDMI接口中预先存储的能力集合确定是否支持对所述辅音频数据不进行同步处理。
可选的,HDMI接口401还用于,若源设备支持对辅音频数据不进行同步处理,则向宿设备发送响应信息,响应信息用于指示宿设备对从源设备接收到的辅音频数据不进行同步处理。
可选的,处理器402还用于,若源设备不支持对辅音频数据不进行同步处理,则对视频数据、主音频数据和辅音频数据进行同步处理;
HDMI接口401还用于,将同步处理后的辅音频数据、主音频数据以及视频数据发送给宿设备。
本实施例所示的源设备能够实现图3所示的音视频数据处理方法,具体实现音视频处理方法的过程请详见图3所示,具体在本实施例中不做赘述。
通过本实施例所提供的源设备能够降低辅音频数据的直达声时延,从而使得宿设备所播放的辅音频数据不会让用户听见明显的回声,有效的保障了宿设备所播放的音频的音质。
图3所示的实施例说明了本发明实施例所提供的音视频处理方法,以下结合图5所示的实施例说明本发明实施例所提供的一种宿设备,其中,图5所提供的宿设备能够支持图3所示的音视频处理方法。
具体的，宿设备包括高清晰度多媒体接口HDMI501、处理器502以及播放器503。
具体的,该宿设备可因配置或性能不同而产生比较大的差异,可以包括一个或一个以上的处理器502。
播放器503可为用于播放视频数据的显示屏,以及用于播放主音频数据和辅音频数据的扬声器。
具体的,处理器502分别通过总线与高清晰度多媒体接口HDMI501以及播放器503连接。
HDMI接口501,用于向源设备发送模式信息,模式信息包括第一标识,第一标识用于指示源设备对从外接设备接收的辅音频数据不进行同步处理;
HDMI接口501还用于,接收源设备发送的未经过同步处理的辅音频数据和经过同步处理的主音频数据和视频数据;
处理器502,用于对主音频数据和视频数据进行同步处理;
播放器503,用于播放未经同步处理的辅音频数据和经过同步处理的主音频数据和视频数据。
可选的,HDMI接口501还用于,预先存储有能力集合,能力集合包括支持对辅音频数据不进行同步处理或支持对辅音频数据进行同步处理;
处理器502还用于,通过HDMI接口中预先存储的能力集合确定是否支持对辅音频数据不进行同步处理;
HDMI接口501还用于,若处理器通过能力集合确定支持对辅音频数据不进行同步处理,则将包括有第一标识的模式信息发送给源设备。
可选的,HDMI接口501还用于,接收源设备发送的响应信息,响应信息用于指示宿设备对从源设备接收到的辅音频数据不进行同步处理。
可选的,处理器502还用于,配置模式信息,模式信息还包括第二标识,第二标识用于指示源设备对从外接设备接收的辅音频数据进行同步处理;
HDMI接口501还用于,若源设备不支持对辅音频数据不进行同步处理,则接收源设备通过HDMI接口发送的同步处理后的辅音频数据、主音频数据以及视频数据;
处理器502还用于,对视频数据、主音频数据和辅音频数据进行同步处理;
播放器503还用于,播放经过同步处理的辅音频数据、主音频数据和视频数据。
本实施例所示的宿设备能够实现图3所示的音视频数据处理方法,具体实现音视频处理方法的过程请详见图3所示,具体在本实施例中不做赘述。
通过本实施例所提供的宿设备能够降低辅音频数据的直达声时延,从而使得宿设备所播放的辅音频数据不会让用户听见明显的回声,有效的保障了宿设备所播放的音频的音质。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的设备的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
以上实施例仅用以说明本发明的技术方案，而非对其限制；尽管参照前述实施例对本发明进行了详细的说明，本领域的普通技术人员应当理解：其依然可以对前述各实施例所记载的技术方案进行修改，或者对其中部分技术特征进行等同替换；而这些修改或者替换，并不使相应技术方案的本质脱离本发明各实施例技术方案的精神和范围。

Claims (16)

  1. 一种音视频处理方法,其特征在于,所述方法包括:
    源设备通过高清晰度多媒体接口HDMI接收宿设备发送的模式信息,所述模式信息包括第一标识,所述第一标识用于指示所述源设备对从外接设备接收的辅音频数据不进行同步处理;
    所述源设备判断自身是否支持对所述辅音频数据不进行同步处理;
    若所述源设备支持对所述辅音频数据不进行同步处理,则所述源设备对视频数据和主音频数据进行同步处理;
    所述源设备通过所述HDMI接口将未经过同步处理的所述辅音频数据和经过同步处理的所述主音频数据和所述视频数据发送给所述宿设备。
  2. 根据权利要求1所述的方法,其特征在于,所述源设备判断自身是否支持对所述辅音频数据不进行同步处理包括:
    所述源设备通过所述HDMI接口中预先存储的能力集合确定是否支持对所述辅音频数据不进行同步处理,所述能力集合包括支持对辅音频数据不进行同步处理或支持对辅音频数据进行同步处理。
  3. 根据权利要求1或2所述的方法,其特征在于,若所述源设备支持对所述辅音频数据不进行同步处理,所述方法还包括:
    所述源设备向所述宿设备发送响应信息,所述响应信息用于指示所述宿设备对从所述源设备接收到的所述辅音频数据不进行同步处理。
  4. 根据权利要求1-3任一项所述的方法,其特征在于,所述模式信息还包括第二标识,所述第二标识用于指示所述源设备对从所述外接设备接收的辅音频数据进行同步处理,所述方法还包括:
    若所述源设备不支持对所述辅音频数据不进行同步处理,则所述源设备对所述视频数据、所述主音频数据和所述辅音频数据进行同步处理;
    所述源设备通过所述HDMI接口将经过同步处理的所述辅音频数据、所述主音频数据以及所述视频数据发送给所述宿设备。
  5. 一种音视频处理方法,其特征在于,包括:
    宿设备通过高清晰度多媒体接口HDMI向源设备发送模式信息，所述模式信息包括第一标识，所述第一标识用于指示所述源设备对从外接设备接收的辅音频数据不进行同步处理；
    所述宿设备接收所述源设备通过所述HDMI接口发送的未经过同步处理的所述辅音频数据和经过同步处理的主音频数据和视频数据;
    所述宿设备对所述主音频数据和所述视频数据进行同步处理;
    所述宿设备播放未经同步处理的所述辅音频数据和经过同步处理的所述主音频数据和所述视频数据。
  6. 根据权利要求5所述的方法,其特征在于,所述宿设备通过高清晰度多媒体接口HDMI向源设备发送模式信息之前,所述方法还包括:
    所述宿设备通过所述HDMI接口中预先存储的能力集合确定是否支持对所述辅音频数据不进行同步处理,所述能力集合包括支持对辅音频数据不进行同步处理或支持对辅音频数据进行同步处理;
    若所述宿设备通过所述HDMI接口中预先存储的能力集合确定支持对所述辅音频数据不进行同步处理,则所述宿设备将包括有所述第一标识的所述模式信息发送给所述源设备。
  7. 根据权利要求5或6所述的方法,其特征在于,所述宿设备对所述主音频数据和所述视频数据进行同步处理之前,所述方法还包括:
    所述宿设备接收所述源设备发送的响应信息,所述响应信息用于指示所述宿设备对从所述源设备接收到的所述辅音频数据不进行同步处理。
  8. 根据权利要求5至7任一项所述的方法,其特征在于,所述宿设备通过高清晰度多媒体接口HDMI向源设备发送模式信息之前,所述方法还包括:
    所述宿设备配置所述模式信息,所述模式信息还包括第二标识,所述第二标识用于指示所述源设备对从所述外接设备接收的辅音频数据进行同步处理,所述方法还包括:
    若所述源设备不支持对所述辅音频数据不进行同步处理,则所述宿设备接收所述源设备通过所述HDMI接口发送的经过同步处理的所述辅音频数据、所述主音频数据以及所述视频数据;
    所述宿设备对所述视频数据、所述主音频数据和所述辅音频数据进行同步处理；
    所述宿设备播放所述经过同步处理的所述辅音频数据、所述主音频数据和所述视频数据。
  9. 一种源设备,其特征在于,包括高清晰度多媒体接口HDMI、处理器以及存储器;
    所述HDMI接口,用于接收宿设备发送的模式信息,所述模式信息包括第一标识,所述第一标识用于指示所述源设备对从外接设备接收的辅音频数据不进行同步处理;
    所述处理器,用于判断源设备自身是否支持对所述辅音频数据不进行同步处理;
    所述存储器,用于存储视频数据和主音频数据;
    所述处理器还用于,若所述源设备支持对所述辅音频数据不进行同步处理,则所述处理器对所述视频数据和所述主音频数据进行同步处理;
    所述HDMI接口还用于,将未经过同步处理的所述辅音频数据和经过同步处理的所述主音频数据和所述视频数据发送给所述宿设备。
  10. 根据权利要求9所述的源设备,其特征在于,
    所述HDMI接口还用于,预先存储有能力集合,所述能力集合包括支持对辅音频数据不进行同步处理或支持对辅音频数据进行同步处理;
    所述处理器还用于,通过所述HDMI接口中预先存储的所述能力集合确定是否支持对所述辅音频数据不进行同步处理。
  11. 根据权利要求9或10所述的源设备,其特征在于,所述HDMI接口还用于,若所述源设备支持对所述辅音频数据不进行同步处理,则向所述宿设备发送响应信息,所述响应信息用于指示所述宿设备对从所述源设备接收到的所述辅音频数据不进行同步处理。
  12. 根据权利要求9至11任一项所述的源设备,其特征在于,所述模式信息还包括第二标识,所述第二标识用于指示所述源设备对从所述外接设备接收的辅音频数据进行同步处理;
    所述处理器还用于，若所述源设备不支持对所述辅音频数据不进行同步处理，则对所述视频数据、所述主音频数据和所述辅音频数据进行同步处理；
    所述HDMI接口还用于,将经过同步处理的所述辅音频数据、所述主音频数据以及所述视频数据发送给所述宿设备。
  13. 一种宿设备,其特征在于,包括高清晰度多媒体接口HDMI、处理器以及播放器;
    所述HDMI接口,用于向源设备发送模式信息,所述模式信息包括第一标识,所述第一标识用于指示所述源设备对从外接设备接收的辅音频数据不进行同步处理;
    所述HDMI接口还用于,接收所述源设备发送的未经过同步处理的所述辅音频数据和经过同步处理的主音频数据和视频数据;
    所述处理器,用于对所述主音频数据和所述视频数据进行同步处理;
    所述播放器,用于播放所述未经同步处理的所述辅音频数据和经过同步处理的主音频数据和视频数据。
  14. 根据权利要求13所述的宿设备,其特征在于,
    所述HDMI接口还用于,预先存储有能力集合,所述能力集合包括支持对辅音频数据不进行同步处理或支持对辅音频数据进行同步处理;
    所述处理器还用于,通过所述HDMI接口中预先存储的所述能力集合确定是否支持对所述辅音频数据不进行同步处理;
    所述HDMI接口还用于,若所述处理器通过所述能力集合确定支持对所述辅音频数据不进行同步处理,则将包括有所述第一标识的所述模式信息发送给所述源设备。
  15. 根据权利要求13或14所述的宿设备,其特征在于,
    所述HDMI接口还用于,接收所述源设备发送的响应信息,所述响应信息用于指示所述宿设备对从所述源设备接收到的所述辅音频数据不进行同步处理。
  16. 根据权利要求13至15任一项所述的宿设备,其特征在于,
    所述处理器还用于，配置所述模式信息，所述模式信息还包括第二标识，所述第二标识用于指示所述源设备对从所述外接设备接收的辅音频数据进行同步处理；
    所述HDMI接口还用于,若所述源设备不支持对所述辅音频数据不进行同步处理,则接收所述源设备通过所述HDMI接口发送的经过同步处理的所述辅音频数据、所述主音频数据以及所述视频数据;
    所述处理器还用于,对所述视频数据、所述主音频数据和所述辅音频数据进行同步处理;
    所述播放器还用于,播放所述经过同步处理的所述辅音频数据、所述主音频数据和所述视频数据。
PCT/CN2016/105442 2016-03-04 2016-11-11 一种音视频处理方法以及相关设备 WO2017148178A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610125958.2 2016-03-04
CN201610125958.2A CN105791937A (zh) 2016-03-04 2016-03-04 一种音视频处理方法以及相关设备


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1819707A (zh) * 2005-02-08 2006-08-16 上海渐华科技发展有限公司 卡拉ok麦克风
CN101098523A (zh) * 2006-06-29 2008-01-02 海尔集团公司 一种手机实现卡拉ok的方法及具有卡拉ok功能的手机
JP2011197344A (ja) * 2010-03-19 2011-10-06 Yamaha Corp サーバ
CN103268763A (zh) * 2013-06-05 2013-08-28 广州市花都区中山大学国光电子与通信研究院 一种基于同步音频提取和实时传输的无线影音系统
CN105791937A (zh) * 2016-03-04 2016-07-20 华为技术有限公司 一种音视频处理方法以及相关设备



