CN116887133A - Main audio equipment and audio data transmission method - Google Patents


Info

Publication number
CN116887133A
Authority
CN
China
Prior art keywords: target, data, audio, audio data, control class
Prior art date
Legal status
Pending
Application number
CN202310793905.8A
Other languages
Chinese (zh)
Inventor
王相祥
肖劲立
Current Assignee
Shenzhen Xinyang International Trade Co ltd
Original Assignee
Shenzhen Xinyang International Trade Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Xinyang International Trade Co ltd
Priority to CN202310793905.8A
Publication of CN116887133A
Legal status: Pending

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 - Circuits for transducers, loudspeakers or microphones
    • H04R3/12 - Circuits for transducers, loudspeakers or microphones for distributing signals to two or more loudspeakers
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 - Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 - Network streaming of media packets
    • H04L65/65 - Network streaming protocols, e.g. real-time transport protocol [RTP] or real-time control protocol [RTCP]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 - Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 - Network streaming of media packets
    • H04L65/75 - Media network packet handling
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2430/00 - Signal processing covered by H04R, not provided for in its groups

Abstract

The disclosure relates to a main audio device and an audio data transmission method, applied to the field of electroacoustic technology, for solving the problem that the configuration of current main audio devices is inflexible. The main audio device includes a controller configured to: when a peripheral device is detected to have accessed the main audio device through a USB interface, determine whether the peripheral device is a target wireless device; when the peripheral device is determined to be the target wireless device, control the USB interface to be extended into a target I2S interface; pack target audio data and target control class data into a target data packet, where the target audio data includes audio data of N channels and the target control class data includes control class data of the N channels; and, based on the target I2S interface, send the target data packet to N slave audio devices through the peripheral device, so that each slave audio device parses the received target data packet and plays the parsed audio data of its corresponding channel based on the control class data of that channel.

Description

Main audio equipment and audio data transmission method
Technical Field
Embodiments of the present application relate to the field of electroacoustic technology, and more particularly, to a main audio device and an audio data transmission method.
Background
At present, a master audio device (such as a master sound box) has a built-in Bluetooth module, WiFi module, wireless module or the like (hereinafter collectively referred to as a connection module). The master audio device connects to other slave audio devices through the connection module, and the master and slave audio devices play audio of different channels simultaneously, so that the master audio device can expand a panoramic sound playing effect. That is, a master audio device with a built-in connection module has the function of expanding the panoramic sound playing effect, while a master audio device without a built-in connection module does not.
Therefore, whether a main audio device has the function of expanding the panoramic sound playing effect is determined by whether a connection module is built in; the configuration of the main audio device is fixed and lacks flexibility.
Disclosure of Invention
In order to solve, or at least partially solve, the above technical problems, embodiments of the present application provide a main audio device and an audio data transmission method, which enable the main audio device to expand the panoramic sound playing effect without a built-in connection module and improve the flexibility of its configuration.
In a first aspect, an embodiment of the present application provides a main audio device, including:
A controller configured to: when a peripheral device is detected to have accessed the main audio device through a USB interface, determine whether the peripheral device is a target wireless device; when the peripheral device is determined to be the target wireless device, control the USB interface to be extended into a target I2S interface; pack target audio data and target control class data into a target data packet, where the target audio data includes audio data of N channels and the target control class data includes control class data of the N channels; and, based on the target I2S interface, send the target data packet to N slave audio devices through the peripheral device, so that each slave audio device parses the received target data packet and plays the parsed audio data of its corresponding channel based on the parsed control class data of that channel, the N slave audio devices corresponding one-to-one with the N channels.
In a second aspect, an embodiment of the present application provides an audio data transmission method, applied to a main audio device, including: when a peripheral device is detected to have accessed the main audio device through a USB interface, determining whether the peripheral device is a target wireless device; when the peripheral device is determined to be the target wireless device, controlling the USB interface to be extended into a target I2S interface; packing target audio data and target control class data into a target data packet, where the target audio data includes audio data of N channels and the target control class data includes control class data of the N channels; and, based on the target I2S interface, sending the target data packet to N slave audio devices through the peripheral device, so that each slave audio device parses the received target data packet and plays the parsed audio data of its corresponding channel based on the parsed control class data of that channel, the N slave audio devices corresponding one-to-one with the N channels.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the audio data transmission method of the second aspect is implemented.
In a fourth aspect, embodiments of the present application provide a computer program product which, when run on a computer, causes the computer to implement the audio data transmission method of the second aspect.
Compared with the prior art, the technical solution provided by the embodiments of the application has the following advantages. In the embodiments of the application, when a peripheral device is detected to have accessed the main audio device through the USB interface, it is determined whether the peripheral device is a target wireless device; when the peripheral device is determined to be the target wireless device, the USB interface is controlled to be extended into a target I2S interface; target audio data and target control class data are packed into a target data packet, where the target audio data includes audio data of N channels and the target control class data includes control class data of the N channels; and, based on the target I2S interface, the target data packet is sent to N slave audio devices through the peripheral device, so that each slave audio device parses the received target data packet and plays the parsed audio data of its corresponding channel based on the parsed control class data of that channel, the N slave audio devices corresponding one-to-one with the N channels. In this way, the main audio device provided by the embodiments of the application needs no built-in connection module: when the target wireless device is detected to have accessed the main audio device through the USB interface, the USB interface is extended into an I2S interface and the target wireless device transmits the target data packet, which includes the target audio data and the target control class data, to the slave audio devices; each slave audio device then obtains the audio data and control class data of its corresponding channel from the received target data packet and plays the audio data of that channel based on its control class data. The main audio device is thus connected to the slave audio devices through the target wireless device, the main and slave audio devices play audio of different channels simultaneously, and a panoramic sound effect is achieved. The main audio device therefore has the function of expanding the panoramic sound playing effect without a built-in connection module, which improves the flexibility of its configuration.
Drawings
In order to more clearly illustrate the embodiments of the present application or the related art, the drawings required for describing the embodiments or the related art are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained from these drawings by a person of ordinary skill in the art.
FIG. 1 illustrates an operational scenario between a master audio device and a slave audio device according to some embodiments;
FIG. 2 illustrates a hardware configuration block diagram of a display device according to some embodiments;
FIG. 3 illustrates a hardware configuration block diagram of a master audio device in accordance with some embodiments;
FIG. 4 illustrates a schematic diagram of a master audio device communicating with slave audio devices via USB Dongle devices in accordance with some embodiments;
FIG. 5 illustrates a pin translation schematic of an extension of the USB type-c interface to an I2S interface, in accordance with some embodiments;
FIG. 6 illustrates a flow control diagram of an extension of the USB type-c interface to an I2S interface, in accordance with some embodiments;
FIG. 7 illustrates one of the flow diagrams of an audio data transmission method according to some embodiments;
FIG. 8 illustrates a second flow chart of a method of audio data transmission according to some embodiments;
FIG. 9 illustrates a third flow chart of a method of audio data transmission according to some embodiments;
FIG. 10 illustrates a fourth flow chart of a method of audio data transmission, according to some embodiments;
FIG. 11 illustrates a fifth flow chart of an audio data transmission method according to some embodiments;
FIG. 12 illustrates a sixth flow chart of a method of audio data transmission, in accordance with some embodiments;
FIG. 13 illustrates a seventh flow chart of a method of audio data transmission according to some embodiments.
Detailed Description
For the purposes of making the objects and embodiments of the present application clearer, exemplary embodiments of the present application are described in detail below with reference to the accompanying drawings, in which exemplary embodiments of the present application are illustrated. It is apparent that the described exemplary embodiments are only some, not all, of the embodiments of the present application.
It should be noted that the brief description of the terminology in the present application is for the purpose of facilitating understanding of the embodiments described below only and is not intended to limit the embodiments of the present application. Unless otherwise indicated, these terms should be construed in their ordinary and customary meaning.
The terms "first", "second", "third" and the like in the description, in the claims and in the above drawings are used for distinguishing between similar objects or entities and are not necessarily intended to describe a particular order or sequence, unless otherwise indicated. It is to be understood that terms so used are interchangeable under appropriate circumstances.
The terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to all elements explicitly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
Fig. 1 is a schematic view of a scene including a display device 100, a main audio device 200, a left surround audio device 300 and a right surround audio device 400. In general, the main audio device 200, the left surround audio device 300 and the right surround audio device 400 are peripheral audio devices of the display device 100 and output the audio of the display device to achieve a panoramic sound effect. The main audio device 200, the left surround audio device 300 and the right surround audio device 400 may also be used on their own as audio players, playing audio data downloaded from a network or stored locally, thereby realizing a panoramic sound effect.
The display device provided by the embodiment of the application can have various implementation forms, for example, a television, an intelligent television, a laser projection device, a display (monitor), an electronic whiteboard (electronic bulletin board), an electronic desktop (electronic table), a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device and the like.
As shown in fig. 2, the display apparatus 100 includes at least one of a modem 110, a communicator 120, a detector 130, an external device interface 140, a controller 150, a display 160, an audio output interface 170, a user interface 180, an external memory, and a power supply.
In some embodiments, the controller includes a processor, a video processor, an audio processor, a graphics processor, RAM, ROM, and first through Nth interfaces for input/output.
The display 160 includes a display screen component for presenting a picture and a driving component for driving image display. It receives image signals output from the controller and displays video content, image content, a menu manipulation interface and a user manipulation UI.
The display 160 may be a liquid crystal display, an OLED display, a projection device, or a projection screen.
The communicator 120 is a component for communicating with external devices or servers according to various communication protocol types. For example: the communicator may include at least one of a Wifi module, a bluetooth module, a wired ethernet module, or other network communication protocol chip or a near field communication protocol chip, and an infrared receiver. The display apparatus 100 may establish transmission and reception of control signals and data signals with an external control device or a server through the communicator 120.
The user interface 180 may be used to receive control signals from a control device (e.g., an infrared remote control). Alternatively, it may directly receive user input operation instructions and convert them into instructions that the display device 100 can recognize and respond to; in this case it may be referred to as a user input interface.
The detector 130 is used to collect signals of the external environment or interaction with the outside. For example, the detector 130 includes a light receiver, a sensor for collecting the intensity of ambient light; alternatively, the detector 130 includes an image collector such as a camera, which may be used to collect external environmental scenes, attributes of a user, or user interaction gestures, or alternatively, the detector 130 includes a sound collector such as a microphone, etc. for receiving external sounds.
The external device interface 140 may include, but is not limited to, the following: high Definition Multimedia Interface (HDMI), analog or data high definition component input interface (component), composite video input interface (CVBS), USB input interface (USB), RGB port, etc. The input/output interface may be a composite input/output interface formed by a plurality of interfaces.
The modem 110 receives broadcast television signals through a wired or wireless reception manner, and demodulates audio and video signals, such as EPG data signals, from a plurality of wireless or wired broadcast television signals.
In some embodiments, the controller 150 and the modem 110 may be located in separate devices, i.e., the modem 110 may also be located in an external device to the host device in which the controller 150 is located, such as an external set-top box or the like.
The controller 150 controls the operation of the display device and responds to the user's operations through various software control programs stored on a memory (internal memory or external memory). The controller 150 controls the overall operation of the display apparatus 100. For example: in response to receiving a user command for selecting a UI object to be displayed on the display 160, the controller 150 may perform an operation related to the object selected by the user command.
In some embodiments, the controller includes at least one of a central processing unit (Central Processing Unit, CPU), a video processor, an audio processor, a graphics processor (Graphics Processing Unit, GPU), and a random access Memory (Random Access Memory, RAM), a Read-Only Memory (ROM), a first interface to an nth interface for input/output, a communication Bus (Bus), and the like.
RAM, also called main memory, is internal memory that exchanges data directly with the controller. It can be read and written at any time (except while being refreshed) and is fast, so it is often used as temporary storage for the operating system or other running programs. Its biggest difference from ROM is volatility: stored data is lost when power is removed. RAM is used in computers and digital systems to temporarily store programs, data and intermediate results. ROM operates in a non-destructive readout mode; its information can only be read, not written. Once written, the information is fixed and is not lost even when the power supply is turned off, so ROM is also called fixed memory.
The user may input a user command through a Graphical User Interface (GUI) displayed on the display 160, and the user input interface receives the user input command through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface recognizes the sound or gesture through the sensor to receive the user input command.
A "user interface" is a media interface for interaction and exchange of information between an application or operating system and a user, which enables conversion between an internal form of information and a user-acceptable form. A commonly used presentation form of the user interface is a graphical user interface (Graphic User Interface, GUI), which refers to a user interface related to computer operations that is displayed in a graphical manner. It may be an interface element such as an icon, a window, a control, etc. displayed in a display screen of the display device, where the control may include a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
As shown in fig. 3, the main audio device 200 includes a controller 210, a USB interface 220, a user input/output interface 230, an external memory, a power supply, and the like. The main audio device 200 may implement the audio data transmission method provided in the embodiment of the present application.
The controller 210 controls the operation of the main audio device and responds to the user's operations through various software control programs stored on a memory (internal memory or external memory). The controller 210 controls the overall operation of the main audio device 200.
In some embodiments the controller includes at least one of an audio processor, a random access Memory (Random Access Memory, RAM), a Read-Only Memory (ROM), a first to nth interface for input/output, a communication Bus (Bus), and the like.
RAM, also called main memory, is internal memory that exchanges data directly with the controller. It can be read and written at any time (except while being refreshed) and is fast, so it is often used as temporary storage for the operating system or other running programs. Its biggest difference from ROM is volatility: stored data is lost when power is removed. RAM is used in computers and digital systems to temporarily store programs, data and intermediate results. ROM operates in a non-destructive readout mode; its information can only be read, not written. Once written, the information is fixed and is not lost even when the power supply is turned off, so ROM is also called fixed memory.
The controller 210 may be a micro control unit (Microcontroller Unit, MCU), also called a single-chip microcomputer (Single Chip Microcomputer): a chip-level computer formed by appropriately reducing the frequency and specification of a central processing unit (Central Processing Unit, CPU) and integrating peripheral interfaces such as memory, timers/counters, USB, A/D conversion, UART, PLC, DMA and even an LCD driving circuit on a single chip.
The user input/output interface 230 includes a microphone, touch pad, sensors, keys, etc.
USB interface 220 includes a USB type-c interface, a USB type-B interface, and so on.
To achieve panoramic sound effects, a master audio device and a plurality of slave audio devices are required to simultaneously sound, and audio of different channels is respectively output.
Traditional main audio devices fall into two types: those with a built-in connection module and those without. A main audio device with a built-in connection module can expand by connecting with other slave audio devices and then play panoramic sound audio (hereinafter referred to as the expansion function); a main audio device without a built-in connection module cannot. In addition, a main audio device with a built-in connection module is generally fixed with the expansion function, which cannot be removed (the user cannot change it), while a main audio device without a built-in connection module is generally fixed without the expansion function, which cannot be added. That is, the configuration of current main audio devices is fixed and lacks flexibility.
At present, a main audio device with a built-in connection module realizes single-device playback by keeping the slave audio devices silent through software control; however, this software control is complex and is not easy for users to operate. Moreover, the built-in connection module is generally expensive, which raises the cost of a main audio device with a built-in connection module. Ordinary users therefore choose a main audio device without a built-in connection module, which is low in cost but lacks the expansion function and sometimes cannot meet users' demand for it.
WiFi offers a large connection bandwidth and low delay, but a WiFi module is costly. A Bluetooth module is cheap relative to WiFi, but Bluetooth bandwidth is narrow and its rate is low, so the delay is large when the amount of audio data is large. To reduce the delay, the Bluetooth module generally performs lossy compression on the audio data through a lossy compression algorithm; although this effectively reduces the data size, the compressed audio data is distorted, which affects the high-quality audio experience. If the compressed audio data must play back without distortion, the compression algorithm must be highly complex, which increases software development cost and the amount of computation during compression.
The embodiment of the application provides a main audio device and an audio data transmission method, wherein the main audio device can realize the audio data transmission method provided by the embodiment of the application or a functional module or a functional entity in the main audio device can realize the audio data transmission method provided by the embodiment of the application. The main audio device includes: a controller corresponds to the controller 210 in fig. 3 described above.
Some embodiments of the application provide a main audio device, comprising: a controller configured to: when a peripheral device is detected to have accessed the main audio device through a USB interface, determine whether the peripheral device is a target wireless device; when the peripheral device is determined to be the target wireless device, control the USB interface to be extended into a target I2S interface; pack target audio data and target control class data into a target data packet, where the target audio data includes audio data of N channels and the target control class data includes control class data of the N channels; and, based on the target I2S interface, send the target data packet to N slave audio devices through the peripheral device, so that each slave audio device parses the received target data packet and plays the parsed audio data of its corresponding channel based on the parsed control class data of that channel, the N slave audio devices corresponding one-to-one with the N channels.
The USB interface may be a USB type-C interface, a USB type-B interface, or any other type of USB interface, which is not limited here.
The USB interface is expanded to be a target I2S interface, namely the USB interface is converted to be the target I2S interface.
The target wireless device can communicate with the slave audio devices, which have I2S interfaces, through a wireless transmission mode such as 2.4 GHz or 5.8 GHz to transmit the target data packet.
The target wireless device may be a USB Dongle device, or any other device that can communicate with the slave audio devices having an I2S interface to transmit the target data packet, which is not limited here.
The main audio device can detect whether a hot-plug event occurs on the USB interface, and thereby determine whether a peripheral device has accessed the USB interface.
Illustratively, as shown in fig. 4, the master audio device 200 extends the USB type-C interface into the target I2S interface and transfers the target data packet to the USB Dongle device 500, and the USB Dongle device 500 forwards the data by 5.8 GHz wireless transmission to the wireless plug-in devices 600 with I2S interfaces (including the slave audio device 300 and the slave audio device 400), so that the master audio device connects with the slave audio devices through the USB Dongle device 500. After receiving the target data packet, a slave audio device parses it: it unpacks the target data packet, parses the audio data of its corresponding channel from the unpacked target audio data, parses the control class data of its corresponding channel from the unpacked target control class data, and then plays the audio data of that channel based on the parsed control class data of that channel.
The USB type-C interface is rich in pins. Through a pin multiplexing mechanism, two pins are selected for I2S communication: after a hot-plug event occurs, I2S communication is triggered to read the device identifier, the peripheral device accessing the USB type-C interface is identified as the target wireless device, and the tristate gate at the rear end is controlled to switch to the I2S interface. As shown in fig. 5, the normal USB D+ pin function is converted into Dongle I2S_LRCK (the frame clock that distinguishes the left and right channel data), and the common USB D- pin function is converted into Dongle I2S_BCK. Since LRCK and BCK are carried on the differential D+/D- pair, noise interference on the data is reduced.
Illustratively, as shown in fig. 6, when the USB interface is powered on, the MCU master-control integrated circuit (Integrated Circuit, IC) chip software initializes the peripheral power supply and waits for a hot-plug event. After detecting that a peripheral device has been plugged in, it starts I2S communication to read the device identifier of the peripheral device and determines from the identifier whether the peripheral device is the target wireless device. If the peripheral device is not the target wireless device, it returns to monitoring the plug-event state; if the peripheral device is the target wireless device, it controls the corresponding tristate gate port, switches the USB interface into the target I2S interface, and switches the USB communication link into the I2S communication link, so that the main audio device performs I2S communication with a plurality of slave audio devices through the peripheral device.
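To make the flow of fig. 6 concrete, the following is a minimal C sketch of the detection-and-switch loop, for illustration only. All helper names (usb_hotplug_pending, i2s_read_device_id, usb_link_disable, tristate_select_i2s) and the TARGET_DONGLE_ID value are hypothetical stand-ins for MCU SDK calls and are not taken from the patent; the stubs below merely simulate the hardware.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TARGET_DONGLE_ID 0x5A5Au   /* assumed identifier of the target wireless device */

typedef enum { LINK_USB, LINK_I2S } link_mode_t;

/* Simulation stubs standing in for MCU SDK calls. */
static bool     usb_hotplug_pending(void) { return true; }
static uint16_t i2s_read_device_id(void)  { return TARGET_DONGLE_ID; }
static void     usb_link_disable(void)    { puts("USB link disabled"); }
static void     tristate_select_i2s(void) { puts("tristate gate switched to I2S"); }

/* Poll the plug-event state; if the plugged peripheral reports the target
 * wireless device ID, switch the type-C pins from the USB link to the I2S link. */
static link_mode_t poll_and_switch(void)
{
    if (!usb_hotplug_pending())
        return LINK_USB;                    /* keep monitoring the plug-event state */
    if (i2s_read_device_id() != TARGET_DONGLE_ID)
        return LINK_USB;                    /* peripheral is not the target wireless device */
    usb_link_disable();                     /* tear down the USB communication link */
    tristate_select_i2s();                  /* D+ -> I2S_LRCK, D- -> I2S_BCK take effect */
    return LINK_I2S;
}

int main(void)
{
    return poll_and_switch() == LINK_I2S ? 0 : 1;
}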
In the embodiments of the application, the target audio data and the target control class data each include data of a plurality of channels, and each channel corresponds to one slave audio device. After a slave audio device unpacks the target data packet to obtain the target audio data, it parses the audio data of its corresponding channel from the target audio data; after unpacking the target control class data from the target data packet, it parses the control class data of its corresponding channel from the target control class data, and then plays the audio data of that channel based on the control class data of that channel.
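The unpack-and-parse step on the slave side could look like the C sketch below. The packet layout used here (N 16-bit audio samples followed by N control bytes) is an assumption made purely for illustration; the patent does not specify the wire format.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define N_CHANNELS 2

typedef struct {
    int16_t audio[N_CHANNELS];   /* one sample per channel */
    uint8_t control[N_CHANNELS]; /* e.g. a volume/status byte per channel */
} target_packet_t;

/* Extract the audio sample and control byte for the channel this slave owns. */
static void parse_for_channel(const target_packet_t *pkt, size_t ch,
                              int16_t *sample, uint8_t *ctrl)
{
    *sample = pkt->audio[ch];
    *ctrl   = pkt->control[ch];
}

int main(void)
{
    target_packet_t pkt = { .audio = { 1000, -1000 }, .control = { 60, 40 } };
    int16_t s; uint8_t c;
    parse_for_channel(&pkt, 1, &s, &c);      /* slave bound to the right channel */
    printf("play sample %d at volume %u\n", s, (unsigned)c);
    return 0;
}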
In the embodiments of the application, extending the USB type-C interface allows communication with any plug-in device carrying an I2S interface to be supported by expansion, achieving one port with multiple uses and enriching the expansion capability of the interface.
In the embodiments of the application, when a peripheral device is detected to have accessed the main audio device through the USB interface, it is determined whether the peripheral device is a target wireless device; when the peripheral device is determined to be the target wireless device, the USB interface is controlled to be extended into a target I2S interface; target audio data and target control class data are packed into a target data packet, where the target audio data includes audio data of N channels and the target control class data includes control class data of the N channels; and, based on the target I2S interface, the target data packet is sent to N slave audio devices through the peripheral device, so that each slave audio device parses the received target data packet and plays the parsed audio data of its corresponding channel based on the parsed control class data of that channel, the N slave audio devices corresponding one-to-one with the N channels. In this way, the main audio device provided by the embodiments of the application needs no built-in connection module: when the target wireless device is detected to have accessed the main audio device through the USB interface, the USB interface is extended into an I2S interface and the target wireless device transmits the target data packet, which includes the target audio data and the target control class data, to the slave audio devices; each slave audio device then obtains the audio data and control class data of its corresponding channel from the received target data packet and plays the audio data of that channel based on its control class data. The main audio device is thus connected to the slave audio devices through the target wireless device, the main and slave audio devices play audio of different channels simultaneously, and a panoramic sound effect is achieved. The main audio device therefore has the function of expanding the panoramic sound playing effect without a built-in connection module, which improves the flexibility of its configuration.
In some embodiments of the present application, before the target audio data and the target control class data are packed, it may be determined whether the master audio device has established a connection with the N slave audio devices. If the master audio device is determined to be connected with the N slave audio devices, the target audio data and the target control class data corresponding to the N slave audio devices are acquired and packed to obtain the target data packet. If the master audio device is not connected with the N slave audio devices, a connection is first established with the N slave audio devices through the peripheral device, and then the target audio data and the target control class data corresponding to the N slave audio devices are acquired and packed to obtain the target data packet.
In some embodiments of the application, the controller is further configured to: before the target audio data and the target control class data are subjected to packet processing to obtain a target data packet, determining the N slave audio devices which are connected with the master audio device through the peripheral device; the target audio data and the target control class data corresponding to the N slave audio devices are acquired.
After the system starts working and the accessed peripheral device is detected to be the target wireless device, the USB interface is switched to the I2S interface, and the peripheral device is used to detect whether a slave audio device is online. When a slave audio device is online, handshake verification is initiated between the slave audio device and the master audio device (the handshake verification method is not limited in the embodiments of the application); after the handshake verification succeeds, the master audio device establishes a connection with the slave audio device through the peripheral device (and likewise establishes connections with the N slave audio devices through the peripheral device). The master audio device then starts to communicate with the slave audio devices through the peripheral device, and the master and slave audio devices negotiate ID information, which includes contents such as the manufacturer and the device type. After the ID information is determined, the master device acquires the target audio data and the target control class data corresponding to the N slave audio devices.
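The online-detection, handshake and ID-negotiation sequence might be organized as in the following C sketch. The helper names (slave_online, handshake, negotiate_id) and the ID fields are hypothetical, since the patent does not define the handshake method or the exact contents of the ID information.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint16_t vendor;       /* manufacturer */
    uint16_t device_type;  /* e.g. left / right surround slave */
} slave_id_t;

/* Simulation stubs for the wireless link provided by the peripheral (dongle). */
static bool slave_online(int idx)       { (void)idx; return true; }
static bool handshake(int idx)          { (void)idx; return true; }
static slave_id_t negotiate_id(int idx) { return (slave_id_t){ 0x1234u, (uint16_t)idx }; }

/* Returns the number of slave audio devices that completed the connection. */
static int connect_slaves(slave_id_t ids[], int n)
{
    int connected = 0;
    for (int i = 0; i < n; i++) {
        if (!slave_online(i) || !handshake(i))
            continue;                       /* skip slaves that are offline or fail verification */
        ids[connected++] = negotiate_id(i); /* exchange manufacturer / device type information */
    }
    return connected;
}

int main(void)
{
    slave_id_t ids[2];
    printf("%d slave(s) connected\n", connect_slaves(ids, 2));
    return 0;
}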
In the embodiment of the application, the N slave audio devices which are connected with the master audio device through the peripheral device are determined; the target audio data and the target control class data corresponding to the N slave audio devices are acquired, so that the target audio data and the target control class data included in the target data packet can be ensured to correspond to the N slave audio devices.
In some embodiments of the application, the controller is further configured to: before acquiring the target audio data corresponding to the N slave audio devices, acquiring audio data to be transmitted; determining whether the audio data to be transmitted is audio data in a target channel format, wherein the audio data in the target channel format comprises N channels; the controller is specifically configured to: in the case that the audio data to be transmitted is the audio data of the target channel format, determining the audio data to be transmitted as the target audio data; and under the condition that the audio data to be sent is not the audio data in the target channel format, converting the audio data to be sent into the audio data in the target channel format to obtain the target audio data.
It will be understood that after the master audio device obtains the audio data to be transmitted, it determines whether the audio data to be transmitted is audio data in the target channel format (the audio data in the target channel format corresponds to the N slave audio devices). If it is, the audio data to be transmitted is determined to be the target audio data; if it is not, the audio data to be transmitted needs to be converted into audio data in the target channel format to obtain the target audio data.
The specific method of converting the channel format of the audio data may refer to the related art, and is not limited herein.
In the embodiment of the application, under the condition that the audio data to be sent is not the audio data in the target channel format, the audio data to be sent is converted into the audio data in the target channel format to obtain the target audio data, and the one-to-one correspondence between the channel number included in the target audio data and N slave audio devices can be ensured, so that each slave audio device can analyze the audio data in the corresponding channel from the target data packet to play the audio data.
Similarly, in the embodiments of the present application, when the main audio device acquires control class data to be transmitted, it needs to determine whether the control class data to be transmitted includes control class data of N channels, with each channel corresponding to one piece of control class data. If it does, the control class data to be transmitted is determined to be the target control class data; if it does not, the control class data to be transmitted is converted into control class data including the N channels, so that each slave audio device can parse the control class data of its corresponding channel from the target data packet and control playback based on it. The specific conversion method of the control class data to be transmitted may refer to the related art and is not limited here.
In some embodiments of the application, the controller is specifically configured to: under the condition that the audio data to be sent is the audio data in the first channel format, resampling the audio data to be sent to obtain the target audio data; under the condition that the audio data to be sent is the audio data in the second channel format, carrying out up-sampling processing on the audio data to be sent to obtain the target audio data; wherein, the number of channels of the audio data in the first channel format is greater than N, and the number of channels of the audio data in the second channel format is less than N.
The resampling process may be sampling frequency conversion (Sample Rate Converter, SRC) resampling process, or other resampling processes, which is not limited in the embodiment of the present application.
The upsampling process may be an SRC upsampling process, or may be another resampling process, which is not limited by the embodiment of the present application.
Wherein, the SRC resampling processing is to resample and encode the audio data according to the target sampling rate; the SRC up-sampling process is to interpolate the original audio data to generate audio data at the target sampling rate.
Example 1: N is 2, and the audio data in the target channel format is audio data in a binaural format. The audio data to be transmitted contains channel information of the audio. If the audio data to be transmitted is determined to be audio data in a 2.1-channel (three-channel) time-division-multiplexed (Time Division Multiplexing, TDM) audio format, SRC resampling is performed on it to convert it into audio data in a binaural pulse-code-modulation (Pulse Code Modulation, PCM) audio format; if the audio data to be transmitted is determined to be audio data in a mono PCM audio format, SRC up-sampling is performed on it to convert it into audio data in a binaural PCM audio format.
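The channel-format decision in Example 1 can be illustrated with the toy C sketch below. The simple downmix and mono duplication used here are only stand-ins for the SRC resampling and up-sampling processing referred to above, which this example does not specify; the function and constant names are invented for illustration.

#include <stdint.h>
#include <stdio.h>

#define N_TARGET 2   /* target channel format: binaural */

/* Convert one frame of in_ch-channel samples to N_TARGET channels. */
static void to_target_format(const int32_t *in, int in_ch, int32_t out[N_TARGET])
{
    if (in_ch == N_TARGET) {                       /* already binaural: pass through */
        out[0] = in[0];
        out[1] = in[1];
    } else if (in_ch > N_TARGET) {                 /* e.g. 2.1 TDM: fold the extra channel in */
        out[0] = in[0] + in[2] / 2;
        out[1] = in[1] + in[2] / 2;
    } else {                                       /* mono: duplicate to both channels */
        out[0] = in[0];
        out[1] = in[0];
    }
}

int main(void)
{
    int32_t ch21[3] = { 100, 200, 50 };            /* L, R, LFE of a 2.1 frame */
    int32_t out[N_TARGET];
    to_target_format(ch21, 3, out);
    printf("L=%d R=%d\n", (int)out[0], (int)out[1]);
    return 0;
}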
In the embodiment of the application, the number of the channels included in the target audio data is ensured to be in one-to-one correspondence with N slave audio devices through judging the number of the channels of the audio data to be transmitted and converting the channel format of the audio data to be transmitted.
In some embodiments of the application, the controller is further configured to: before the target audio data and the target control class data are subjected to packet processing to obtain a target data packet, the target audio data are subjected to compression processing to obtain compressed target audio data; compressing the target control class data to obtain compressed target control class data; the controller is specifically configured to: and carrying out packet processing on the compressed target audio data and the compressed target control class data to obtain the target data packet.
In the embodiment of the application, before the target audio data and the target control class data are subjected to packet processing to obtain the target data packet, the target audio data and the target control class data are respectively subjected to compression processing, so that the data volume of transmission can be effectively reduced, the transmission speed is improved, and the transmission delay is reduced.
The specific method of the compression process may refer to the related art, and is not limited herein.
In some embodiments of the present application, the compression of the target audio data may be implemented by filtering the noise signal in the target audio data (removing the noise data and reducing the data amount of the target audio data), or by compression-encoding the target audio data, or by first filtering the noise signal in the target audio data and then compression-encoding the filtered target audio data (in which case the compression rate of the target audio data can be effectively improved). The target audio data may also be compressed by other methods, which are not limited here.
The noise signal, i.e., invalid data in the target audio data, may include signals in the target audio data that are unrelated to the audio, signals beyond the user's hearing, data whose sound is too quiet to be audible to the user, and so on.
The compression coding algorithm comprises a lossless compression coding algorithm and a lossy compression coding algorithm, and the embodiment of the application is not limited to a specific compression coding algorithm.
In some embodiments of the application, the controller is specifically configured to: filtering the noise signals in the target audio data to obtain filtered target audio data; and carrying out lossless compression coding on the filtered target audio data to obtain compressed target audio data.
The embodiments of the application can effectively improve the compression rate of the target audio data while ensuring that the audio data is compressed losslessly. After the slave device receives the target data packet, the sound is restored through a lossless sound-quality audio algorithm, which guarantees a high-quality, low-delay, lossless sound-quality experience.
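A simplified C sketch of this two-stage pipeline is given below: sub-threshold samples are gated to zero as the noise-filtering step, and delta coding stands in for the lossless compression coding. Both the NOISE_THRESHOLD value and the choice of delta coding are assumptions made for illustration; the patent's lossless sound-quality audio algorithm is not disclosed here.

#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>

#define NOISE_THRESHOLD 4   /* assumed: samples quieter than this are treated as inaudible */

static void filter_noise(int16_t *samples, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (abs(samples[i]) < NOISE_THRESHOLD)
            samples[i] = 0;                 /* drop data too quiet to be heard */
}

/* Delta coding: store differences between consecutive samples. Reversible
 * (under two's-complement wraparound), so it stands in for a lossless step;
 * small deltas then pack well in a real entropy coder. */
static void delta_encode(const int16_t *in, int16_t *out, size_t n)
{
    int16_t prev = 0;
    for (size_t i = 0; i < n; i++) {
        out[i] = (int16_t)(in[i] - prev);
        prev = in[i];
    }
}

int main(void)
{
    int16_t pcm[6] = { 2, 3, 1000, 1002, 1001, -2 };
    int16_t enc[6];
    filter_noise(pcm, 6);
    delta_encode(pcm, enc, 6);
    for (int i = 0; i < 6; i++) printf("%d ", enc[i]);
    printf("\n");
    return 0;
}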
In some embodiments of the present application, the compression of the target control class data may be implemented by filtering the noise signal in the target control class data (removing the noise data and reducing the data amount of the target control class data), or by compression-encoding the target control class data, or by first filtering the noise signal in the target control class data and then compression-encoding the filtered target control class data (in which case the compression rate of the target control class data can be effectively improved). The target control class data may also be compressed by other methods, which are not limited here.
The noise signal, i.e., invalid data in the target control class data, may include signals unrelated to the audio control class data (such as volume control, status control and indication control), null signals, and so on.
In some embodiments of the application, the controller is specifically configured to: filtering the noise signals in the target control class data to obtain filtered target control class data; and carrying out lossless compression coding on the filtered target control class data to obtain compressed target control class data.
The embodiments of the application can effectively improve the compression rate of the target control class data while ensuring that the control class data is compressed losslessly. After the slave device receives the target data packet, the control class data is restored through a lossless compression algorithm, so that playback of the audio data can be better controlled, guaranteeing a high-quality, low-delay, lossless sound-quality experience.
Example 2: following Example 1 above, after the audio data in the binaural PCM audio format (the target audio data) is acquired, SRC sampling processing is performed on it (filtering the noise signal), and the binaural PCM audio data is then lossless-compression-encoded using a binaural audio compression algorithm (a lossless compression algorithm; the binaural PCM audio data may be compression-encoded by calling a SubFrame function). The control class data contains information such as volume control and is similarly compressed using a similar audio compression algorithm (the control class data is compression-encoded by calling a SubFrame function). Finally, the compressed target audio data and target control class data are packed using a data packing algorithm (a void audioTxEnc_subFrame(void) function may be called to pack the audio data and the control data) to generate the target data packet for wireless transmission. The device receiving the target data packet uses a data unpacking algorithm to divide the data into target audio data and target control class data, obtains the audio data of its corresponding channel from the target audio data, obtains the control class data of its corresponding channel from the target control class data, and plays the audio data of that channel based on the control class data of that channel.
In the embodiments of the application, the lossless compression algorithm, i.e., the lossless sound-quality audio algorithm, analyzes and compresses the audio class data and the control class data according to the audio format, and the compressed target audio data and target control class data are group-packed to obtain the target data packet, which is transmitted wirelessly to realize lossless transmission of the audio data.
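The group-packing step (the role Example 2 assigns to a function such as audioTxEnc_subFrame) could be sketched as in the following C code. The header fields and packet layout are assumptions made for illustration only; the receiver would unpack in the same order before parsing out its own channel.

#include <stdint.h>
#include <string.h>
#include <stdio.h>

typedef struct {
    uint8_t  channels;      /* N */
    uint16_t audio_len;     /* bytes of compressed target audio data */
    uint16_t control_len;   /* bytes of compressed target control class data */
} pkt_header_t;

/* Pack header + audio + control into out; returns the total packet size. */
static size_t pack_target_packet(uint8_t *out, uint8_t channels,
                                 const uint8_t *audio, uint16_t alen,
                                 const uint8_t *ctrl, uint16_t clen)
{
    pkt_header_t h = { channels, alen, clen };
    size_t off = 0;
    memcpy(out + off, &h, sizeof h);   off += sizeof h;
    memcpy(out + off, audio, alen);    off += alen;
    memcpy(out + off, ctrl, clen);     off += clen;
    return off;                        /* the receiver unpacks in the same order */
}

int main(void)
{
    uint8_t audio[4] = { 1, 2, 3, 4 }, ctrl[2] = { 60, 40 }, buf[64];
    printf("packet size: %zu bytes\n",
           pack_target_packet(buf, 2, audio, 4, ctrl, 2));
    return 0;
}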
To describe the present solution in more detail, examples are given below with reference to fig. 7 to 13. It will be understood that the steps referred to in fig. 7 to 13 may, when actually implemented, include more or fewer steps and may be performed in a different order, as long as the audio data transmission method provided in the embodiments of the present application can be carried out. The audio data transmission method is applied to a main audio device controlled by a control device; the execution subject of the method may be the main audio device, or a functional module or functional entity in the main audio device capable of implementing the method, which is not limited here. In addition, for a detailed description of the audio data transmission method provided by the embodiments of the present application, reference may be made to the description of the main audio device above; the same or similar technical effects can be achieved and are not repeated here.
Fig. 7 is a flowchart illustrating steps of a method for implementing audio data transmission, which may include the following S701 to S704, according to one or more embodiments of the present application, applied to a main audio device.
S701: when a peripheral device is detected to have accessed the main audio device through the USB interface, determine whether the peripheral device is a target wireless device.
S702: when the peripheral device is determined to be the target wireless device, control the USB interface to be extended into a target I2S interface.
S703: pack target audio data and target control class data to obtain a target data packet, where the target audio data includes audio data of N channels and the target control class data includes control class data of the N channels.
S704: based on the target I2S interface, send the target data packet to N slave audio devices through the peripheral device, so that each slave audio device parses the received target data packet and plays the parsed audio data of its corresponding channel based on the parsed control class data of that channel, the N slave audio devices corresponding one-to-one with the N channels.
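For orientation, the C sketch below strings S701 to S704 together on the master side. Every helper here is a hypothetical stub standing in for the corresponding step; the earlier sketches show what the individual steps could look like in more detail.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

static bool   detect_target_wireless_device(void) { return true; }             /* S701 */
static void   extend_usb_to_i2s(void)             { puts("I2S link ready"); }  /* S702 */
static size_t pack_audio_and_control(void *buf)   { (void)buf; return 16; }    /* S703 */
static void   send_over_i2s(const void *buf, size_t len)                       /* S704 */
{
    (void)buf;
    printf("sent %zu bytes to the slaves\n", len);
}

int main(void)
{
    if (!detect_target_wireless_device())
        return 1;                       /* peripheral is not the target wireless device */
    extend_usb_to_i2s();
    unsigned char packet[64];
    size_t len = pack_audio_and_control(packet);
    send_over_i2s(packet, len);         /* each slave then parses and plays its own channel */
    return 0;
}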
In some embodiments of the present application, as shown in fig. 8 in conjunction with fig. 7, before S703, the audio data transmission method provided in the embodiment of the present application may further include S705 and S706 described below.
S705, determining the N slave audio devices that establish a connection with the master audio device through the peripheral device.
S706, acquiring the target audio data and the target control class data corresponding to the N slave audio devices.
In some embodiments of the present application, as shown in fig. 9 in conjunction with fig. 8, before S706, the audio data transmission method provided in the embodiment of the present application may further include S707 and S708 described below, where S706 may be implemented specifically by S706a and S706b described below.
S707, obtaining the audio data to be transmitted.
S708, determining whether the audio data to be transmitted is audio data in a target channel format, wherein the audio data in the target channel format comprises N channels.
S706a, determining the audio data to be transmitted as the target audio data in the case that the audio data to be transmitted is the audio data of the target channel format.
S706b, converting the audio data to be sent into the audio data in the target channel format to obtain the target audio data when the audio data to be sent is not the audio data in the target channel format.
In some embodiments of the present application, as shown in fig. 10 in conjunction with fig. 9, S706b may be specifically implemented by S706b1 and S706b2 described below.
S706b1, in the case where the audio data to be sent is the audio data in the first channel format, resampling the audio data to be sent to obtain the target audio data.
S706b2, performing up-sampling processing on the audio data to be sent to obtain the target audio data in the case that the audio data to be sent is the audio data in the second channel format.
Wherein, the number of channels of the audio data in the first channel format is greater than N, and the number of channels of the audio data in the second channel format is less than N.
In some embodiments of the present application, as shown in fig. 11 in conjunction with fig. 10, before S703, the audio data transmission method provided in the embodiment of the present application may further include S709 and S710 described below, where S703 may be specifically implemented by S703a described below.
S709, compressing the target audio data to obtain compressed target audio data.
S710, compressing the target control class data to obtain compressed target control class data.
S703a, performing packet processing on the compressed target audio data and the compressed target control class data to obtain the target data packet.
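The sketch below is one way to realize S709, S710 and S703a, using zlib as a stand-in for whatever lossless codec is actually employed and a simple length-prefixed layout for the packet; the real packet format and codec are not specified by this application.

import struct
import zlib

def pack_target_packet(audio: list, control: list) -> bytes:
    # S703a: assemble the target data packet from per-channel compressed data.
    assert len(audio) == len(control)              # one entry per channel
    body = b""
    for ch, (a, c) in enumerate(zip(audio, control)):
        ca = zlib.compress(a)                      # S709: compressed target audio data
        cc = zlib.compress(c)                      # S710: compressed target control class data
        # channel index, control length, audio length, then the two payloads
        body += struct.pack("<BHI", ch, len(cc), len(ca)) + cc + ca
    return struct.pack("<B", len(audio)) + body    # leading byte: number of channels N

Under this assumed layout, a slave device can walk the length-prefixed entries, keep only the entry whose channel index matches its own, decompress it, and play the audio under the accompanying control class data, which corresponds to the parsing described for S704.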
In some embodiments of the present application, as shown in fig. 12 in conjunction with fig. 11, the above S709 may be specifically implemented by the following S709a and S709b.
S709a, filtering the noise signal in the target audio data to obtain filtered target audio data.
S709b, performing lossless compression coding on the filtered target audio data to obtain the compressed target audio data.
In some embodiments of the present application, as shown in fig. 13 in conjunction with fig. 12, the above S710 may be specifically implemented by the following S710a and S710 b.
S710a, filtering the noise signals in the target control class data to obtain the filtered target control class data.
S710b, performing lossless compression coding on the filtered target control class data to obtain compressed target control class data.
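As an illustration of the filter-then-encode pipeline of S709a/S709b (and, applied in the same way to the control class data, S710a/S710b), the sketch below uses a first-order high-pass filter to suppress low-frequency noise and zlib for the lossless compression coding; the specific filter and codec are assumptions made for the example, not choices made by this application.

import zlib

def filter_noise(samples: list, alpha: float = 0.95) -> list:
    # S709a: first-order high-pass filter removing DC offset / low-frequency noise.
    out, prev_x, prev_y = [], 0, 0.0
    for x in samples:
        prev_y = alpha * (prev_y + x - prev_x)
        prev_x = x
        out.append(int(prev_y))
    return out

def compress_channel(samples: list) -> bytes:
    filtered = filter_noise(samples)                     # S709a
    raw = b"".join(
        max(-32768, min(32767, s)).to_bytes(2, "little", signed=True)
        for s in filtered
    )
    return zlib.compress(raw)                            # S709b: lossless compression coding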
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, it implements each process of the above-mentioned audio data transmission method and achieves the same technical effects; to avoid repetition, the details are not repeated here.
The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
An embodiment of the present application further provides a computer program product which, when run on a computer, causes the computer to implement the audio data transmission method described above.
Finally, it should be noted that the above embodiments are only intended to illustrate, not to limit, the technical solution of the present application. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents, and that such modifications and substitutions do not depart from the spirit of the application.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (10)

1. A main audio device, comprising:
a controller configured to: in the case that a peripheral device is detected to have been connected to the main audio device through a USB interface, determine whether the peripheral device is a target wireless device;
in the case that the peripheral device is determined to be the target wireless device, control the USB interface to be extended into a target I2S interface;
perform packet processing on target audio data and target control class data to obtain a target data packet, wherein the target audio data comprises audio data of N channels, the target control class data comprises control class data of the N channels, and N is a positive integer;
and send, based on the target I2S interface, the target data packet to N slave audio devices through the peripheral device, so that each slave audio device parses the received target data packet and plays the parsed audio data of its corresponding channel based on the parsed control class data of that channel, the N slave audio devices corresponding one-to-one to the N channels.
2. The main audio device of claim 1, wherein the controller is further configured to:
before performing packet processing on the target audio data and the target control class data to obtain the target data packet, determine the N slave audio devices that are connected with the main audio device through the peripheral device;
and acquire the target audio data and the target control class data corresponding to the N slave audio devices.
3. The main audio device of claim 2, wherein the controller is further configured to:
before acquiring the target audio data corresponding to the N slave audio devices, acquire audio data to be sent;
determine whether the audio data to be sent is audio data in a target channel format, wherein the audio data in the target channel format comprises N channels;
the controller is specifically configured to: determine the audio data to be sent as the target audio data in the case that the audio data to be sent is the audio data in the target channel format;
and, in the case that the audio data to be sent is not the audio data in the target channel format, convert the audio data to be sent into the audio data in the target channel format to obtain the target audio data.
4. The main audio device of claim 3, wherein the controller is specifically configured to:
in the case that the audio data to be sent is the audio data in the first channel format, resample the audio data to be sent to obtain the target audio data;
in the case that the audio data to be sent is the audio data in the second channel format, perform up-sampling processing on the audio data to be sent to obtain the target audio data;
wherein the number of channels of the audio data in the first channel format is greater than N, and the number of channels of the audio data in the second channel format is less than N.
5. The main audio device of any of claims 1-4, wherein the controller is further configured to:
before performing packet processing on the target audio data and the target control class data to obtain the target data packet,
compress the target audio data to obtain compressed target audio data;
and compress the target control class data to obtain compressed target control class data;
the controller is specifically configured to:
perform packet processing on the compressed target audio data and the compressed target control class data to obtain the target data packet.
6. The main audio device of claim 5, wherein the controller is specifically configured to:
filter noise signals in the target audio data to obtain filtered target audio data;
and perform lossless compression coding on the filtered target audio data to obtain the compressed target audio data.
7. The main audio device of claim 5, wherein the controller is specifically configured to:
filter noise signals in the target control class data to obtain filtered target control class data;
and perform lossless compression coding on the filtered target control class data to obtain the compressed target control class data.
8. An audio data transmission method, applied to a main audio device, comprising:
in the case that a peripheral device is detected to have been connected to the main audio device through a USB interface, determining whether the peripheral device is a target wireless device;
in the case that the peripheral device is determined to be the target wireless device, controlling the USB interface to be extended into a target I2S interface;
performing packet processing on target audio data and target control class data to obtain a target data packet, wherein the target audio data comprises audio data of N channels, and the target control class data comprises control class data of the N channels;
sending, based on the target I2S interface, the target data packet to N slave audio devices through the peripheral device, so that each slave audio device parses the received target data packet and plays the parsed audio data of its corresponding channel based on the parsed control class data of that channel, the N slave audio devices corresponding one-to-one to the N channels.
9. The method of claim 8, wherein, before the performing packet processing on the target audio data and the target control class data to obtain the target data packet, the method further comprises:
compressing the target audio data to obtain compressed target audio data;
compressing the target control class data to obtain compressed target control class data;
wherein the performing packet processing on the target audio data and the target control class data to obtain the target data packet comprises:
performing packet processing on the compressed target audio data and the compressed target control class data to obtain the target data packet.
10. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the audio data transmission method of claim 8 or 9.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination