CN115038021A - True wireless stereo headset, audio processing, lighting and vibrating method - Google Patents


Info

Publication number
CN115038021A
CN115038021A
Authority
CN
China
Prior art keywords
control module
main control
module
light
audio data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210787328.7A
Other languages
Chinese (zh)
Other versions
CN115038021B (en)
Inventor
黎山
刘辉
刘天伦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Numao Technology Co ltd
Original Assignee
Zhuhai Numao Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Numao Technology Co., Ltd.
Priority to CN202210787328.7A
Publication of CN115038021A
Application granted
Publication of CN115038021B
Legal status: Active

Classifications

    • H04R 5/033 — Headphones for stereophonic communication (H: Electricity; H04: Electric communication technique; H04R: Loudspeakers, microphones, gramophone pick-ups or like acoustic electromechanical transducers; deaf-aid sets; public address systems; H04R 5/00: Stereophonic arrangements)
    • H04R 3/00 — Circuits for transducers, loudspeakers or microphones
    • H04R 2420/07 — Applications of wireless loudspeakers or wireless microphones (H04R 2420/00: Details of connection covered by H04R, not provided for in its groups)
    • Y02B 20/40 — Control techniques providing energy savings, e.g. smart controller or presence detection (Y02: Technologies or applications for mitigation or adaptation against climate change; Y02B: Climate change mitigation technologies related to buildings; Y02B 20/00: Energy efficient lighting technologies)

Abstract

The embodiments of the present application disclose a true wireless stereo (TWS) headset and an audio processing, light-emitting and vibration method. The headset includes a first main control module containing a Bluetooth module, and further includes a first wireless communication module, a second main control module, an audio interface, a plurality of light-emitting units, and a vibration module. The second main control module processes first audio data to obtain a first audio processing result, and further processes second audio data to obtain a second audio processing result. The first main control module obtains a light-emitting instruction and controls the plurality of light-emitting units to emit light based on it; the first main control module further obtains a vibration instruction and controls the vibration module to vibrate based on it. The embodiments of the present application help expand the functions of the TWS headset.

Description

True wireless stereo headset, audio processing, lighting and vibrating method
Technical Field
The present application relates to the technical field of smart headsets, and in particular to a true wireless stereo headset and an audio processing, light-emitting and vibration method.
Background
Currently, True Wireless Stereo (TWS) headsets are increasingly widely used thanks to their compact size and portability. However, because a TWS headset is small, it contains few hardware modules for audio processing; at present, a first main control module disposed in the TWS headset mainly forwards audio data (for example, to a speaker for playback) and performs some simple audio processing (for example, audio encoding and decoding).
As a result, existing headsets have limited functionality and cannot interact with the user, and how to expand the functions of the headset is a technical problem to be solved urgently.
Disclosure of Invention
The embodiments of the present application provide a true wireless stereo headset and an audio processing, light-emitting and vibration method.
In a first aspect, an embodiment of the present application provides a true wireless stereo headset, where the headset includes a first main control module, the first main control module includes a Bluetooth module, and the headset further includes:
the device comprises a first wireless communication module, a second main control module, an audio interface, a plurality of light-emitting units and a vibration module;
the second main control module is configured to process first audio data to obtain a first audio processing result, where the first audio data is received by the first main control module from an external device through the bluetooth module or the first wireless communication module and is sent to the second main control module, and/or is obtained from the external device through the second wireless communication module;
the second main control module is further configured to process second audio data to obtain a second audio processing result, where the second audio data is received by the second main control module through the audio interface;
the first main control module is used for acquiring a light-emitting instruction and controlling the plurality of light-emitting units to emit light based on the light-emitting instruction;
the first main control module is further used for acquiring a vibration instruction and controlling the vibration module to vibrate based on the vibration instruction.
In a second aspect, the present application provides an audio processing, light-emitting and vibration method, applied to a true wireless stereo headset; the headset includes a first main control module, the first main control module includes a Bluetooth module, and the headset further includes: a first wireless communication module, a second main control module, an audio interface, a plurality of light-emitting units, and a vibration module;
the second main control module processes first audio data to obtain a first audio processing result, wherein the first audio data is received by the first main control module from an external device through the bluetooth module or the first wireless communication module and is sent to the second main control module, and/or is obtained from the external device through the second wireless communication module;
the second main control module processes second audio data to obtain a second audio processing result, wherein the second audio data is received by the second main control module through the audio interface;
the first main control module acquires a light-emitting instruction and controls the light-emitting units to emit light based on the light-emitting instruction;
the first main control module acquires a vibration instruction and controls the vibration module to vibrate based on the vibration instruction.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor coupled to a memory, the memory configured to store a computer program, the processor configured to execute the computer program stored in the memory to cause the electronic device to perform the method of the second aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program, where the computer program causes a computer to execute the method according to the second aspect.
In a fifth aspect, embodiments of the present application provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform the method according to the second aspect.
The embodiment of the application has the following beneficial effects:
It can be seen that, in the embodiments of the present application, a first wireless communication module, a second main control module, an audio interface, a plurality of light-emitting units, and a vibration module are added to the TWS headset. After these modules are integrated, first audio data sent by an external device can be acquired through one or both of the two wireless communication modules, broadening the ways in which the TWS headset communicates with external devices. Second, after the first audio data is acquired, the second main control module performs audio processing to obtain a higher-quality audio processing result, expanding the audio processing capability of the TWS headset. The second main control module can also receive second audio data from an external audio device through the audio interface and process it, effectively extending the TWS headset into a wired headset. In addition, the first main control module can obtain a light-emitting instruction and control the plurality of light-emitting units accordingly, producing a light effect matched to or preferred by the user and enabling information interaction with the user through the light-emitting units. Finally, the vibration module can be used for vibration interaction with the user. Therefore, after these modules are integrated, the functions of the TWS headset are greatly expanded, information interaction with the user is realized based on the light-emitting units and the vibration module, and the user experience is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description show some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a TWS headset;
fig. 2 is a schematic structural diagram of a TWS headset according to an embodiment of the present application;
fig. 3 is a schematic diagram illustrating a lighting unit controlled by an application according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of vibration instruction generation according to an embodiment of the present disclosure;
fig. 5A is a schematic diagram of a multiplexing antenna module according to an embodiment of the present application;
fig. 5B is a schematic diagram illustrating an integrated arrangement of a first audio processing module and a second audio processing module according to an embodiment of the present application;
fig. 5C is a schematic diagram illustrating another integrated arrangement of a first audio processing module and a second audio processing module according to an embodiment of the present application;
fig. 6 is a schematic flowchart of an audio processing, lighting and vibrating method according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification do not necessarily all refer to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein may be combined with other embodiments.
To facilitate an understanding of the present application, the related art to which the present application relates will first be explained and illustrated.
The structure and functional modules of the conventional TWS headset will be described with reference to fig. 1.
The TWS headset 10 includes a first main control module 100, a first antenna module 102, a pressure sensing module, a gyroscope module, a microphone array, a light sensing module, a battery protection module, a battery, a charging and communication interface of a charging box, a charging reset module, and a speaker 108.
The first main control module 100 includes a plurality of communication interfaces, a first audio processing module 104, a charging management module, an audio input module, and a bluetooth module 103.
The first main control module 100 performs information interaction with the pressure sensing module, the gyroscope module, the light sensing module and other modules through communication interfaces to realize interactive control. For example, the first main control module 100 detects changes in the pressure applied to the TWS headset through the pressure sensing module and performs interactive control such as playing, pausing, and switching tracks; as another example, the first main control module 100 detects whether the headset is in the ear through the light sensing module. Peripheral devices such as the charging interface, the battery protection module and the battery supply power to the headset and charge it;
the Bluetooth module 103 and the first antenna module 102 are matched with each other to perform information interaction with external equipment;
the microphone array is used for acquiring and inputting audio data and forwarding the audio data to the first main control module, so that functions of conversation, noise reduction and the like are realized;
the speaker 108 is configured to output the audio data sent by the first audio processing module, so as to implement functions such as audio playing.
It can be seen that the first audio processing module 104 arranged in the first main control module only implements forwarding and simple processing of some audio data (for example, noise reduction), and cannot perform complex processing on the audio data, resulting in a relatively low degree of intelligence in the TWS headset; moreover, the TWS headset 10 cannot interact with the user. Therefore, the TWS headset currently has limited functions, poor expansibility, and poor user experience.
First, any one of the light-emitting units in the present application may be, but is not limited to, at least one of a light-emitting diode (LED), a micro LED, a mini LED (sub-millimeter LED), an organic LED (OLED), and the like. In the present application, the light-emitting unit is mainly described taking the LED as an example, and no distinction is made between the light-emitting unit and the LED.
The color of the light emitted from the light emitting unit may be, but is not limited to, red, green, blue, white, etc. That is, the light emitting unit may be, but not limited to, at least one of a red light emitting unit, a green light emitting unit, a blue light emitting unit, and the like. Therefore, the color and the pattern of the light-emitting pattern can be designed through the color and the arrangement mode of the emergent light of the light-emitting unit.
In one embodiment, the light-emitting units include a red light-emitting unit (R-LED), a green light-emitting unit (G-LED), and a blue light-emitting unit (B-LED). By controlling the arrangement positions and light-emission patterns of the R, G, and B units, the cover plate assembly on the headset can be controlled to display light-emitting patterns of different shapes, colors, and changing states.
Referring to fig. 2, fig. 2 is a structural diagram of a true wireless stereo (TWS) headset according to an embodiment of the present application. The TWS headset 10 includes a first main control module 100, a first wireless communication module 101, a second main control module 110, a second wireless communication module 111, a plurality of light emitting units 105, a vibration module 107, and an audio interface 113. As shown in fig. 2, the first wireless communication module 101 is connected to the first main control module 100, the second wireless communication module 111 is connected to the second main control module 110, and the second main control module 110 is connected to the first main control module 100. The second main control module 110 is further connected to the speaker 108 and the audio interface 113 of the TWS headset, respectively.
It should be understood that, to ensure the LEDs display normally, the TWS headset further includes a light-emitting driver 106 (i.e., an LED driver), which is connected to the first main control module 100 and to the plurality of light emitting units 105.
Optionally, the first main control module 100 further includes a bluetooth module 103 for supporting bluetooth communication. Optionally, the first wireless communication module 101 and the second wireless communication module 111 both support a proprietary protocol, with lower communication latency than the bluetooth module 103. It should be understood that, to ensure the first wireless communication module 101 and the second wireless communication module 111 can support the proprietary protocol, the first antenna module 102 is connected to the first wireless communication module 101, and the second antenna module 112 is connected to the second wireless communication module 111.
Optionally, the first main control module 100 includes a first audio processing module 104 for supporting the audio processing function of the first main control module 100, and the second main control module 110 includes a second audio processing module 114 for supporting the audio processing function of the second main control module. Also, the second audio processing module 114 has a higher audio processing capability than the first audio processing module 104. Illustratively, the first audio processing module 104 is only used to perform some simple audio processing operations, such as the conversion of a digital audio signal into an analog audio signal, and so on. The second audio processing module 114 is used to perform complex audio processing functions, such as audio codec.
The functional role of each module is described below.
The second main control module 110 is configured to process the first audio data, that is, the second main control module 110 processes the first audio data through the second audio processing module 114 to obtain a first audio processing result; the first audio data is received by the first main control module 100 from an external device through the bluetooth module 103 or the first wireless communication module 101, and is sent to the second main control module 110, and/or is obtained from the external device through the second wireless communication module 111; further, the second main control module 110 sends the first audio processing result to the speaker 108 for playing;
the second main control module 110 is further configured to process second audio data, that is, process the second audio data through the second audio processing module to obtain a second audio processing result, where the second audio data is received by the second main control module from an external audio device through the audio interface 113; further, the second main control module 110 sends the second audio processing result to the speaker 108 for playing;
in one embodiment of the present application, the second audio data may be an analog audio signal or a digital audio signal, for example, a digital signal transmitted by a computer device. In this way, the second audio processing module 114 can process the second audio data, convert the second audio data into an analog audio signal, and play the analog audio signal through a speaker, thereby converting the TWS headset into a wired headset.
The first main control module 100 is configured to obtain a light emitting instruction, and control the plurality of light emitting units 105 to emit light based on the light emitting instruction;
the first main control module 100 is further configured to obtain a vibration instruction, and control the vibration module 107 to vibrate based on the vibration instruction.
It can be seen that in the embodiment of the present application, the first wireless communication module 101, the second wireless communication module 111, the second main control module 110, the audio interface 113, the plurality of light emitting units 105 and the vibration module 107 are additionally arranged in the TWS headset. After the modules are newly integrated, the first audio data sent by the external device can be acquired through one or two of the two wireless communication modules, so that the communication mode between the TWS earphone and the external device is widened. Secondly, after the first audio data is acquired, the second main control module 110 performs audio processing to obtain an audio processing result with better quality, thereby expanding the audio processing function of the TWS headset. The second main control module 110 may also receive second audio data from an external audio device through the audio interface 113 and process the second audio data, thereby extending the TWS to a wired headset. In addition, the first main control module 100 may further obtain a light emitting instruction, and control the plurality of light emitting units 105 to emit light based on the light emitting instruction, so as to obtain a light effect matched with or preferred by the user, thereby achieving an information interaction with the user through the plurality of light emitting units 105. Finally, a vibration interaction with the user may also be performed through the vibration module 107. Therefore, the function of the TWS headset is greatly expanded after the plurality of modules are integrated, and the information interaction with the user is realized based on the plurality of light-emitting units 105 and the vibration module 107, thereby improving the user experience.
In one embodiment of the present application, when controlling the light-emitting units based on the light-emitting instruction, the first main control module 100 parses the light-emitting instruction to determine the display color and display brightness of each of the plurality of light emitting units 105. Note that when a given light-emitting unit does not need to light up, its display color is determined to be NULL and its display brightness is determined to be 0. Then, based on each unit's display color and brightness, the light-emitting driver 106 drives each unit to display accordingly, presenting a preset light effect corresponding to the light-emitting instruction, where the preset light effect includes one of streamer, breathing, and blinking. Of course, in practical applications, the preset light effect may also be a user-customized light effect, such as a ticker effect or a karaoke effect. The present application does not limit the light effect.
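The per-unit resolution step described above can be sketched as below. The instruction format (an `effect` name plus parameters such as `head`, `phase`, `tick`) is a hypothetical encoding chosen for illustration; the patent does not specify one.

```python
NULL = None  # display color for a unit that should stay off (brightness 0)

def resolve_units(instruction, num_units):
    """Return a (color, brightness) pair per light-emitting unit for one frame."""
    effect = instruction["effect"]
    color = instruction.get("color", "white")
    states = []
    for i in range(num_units):
        if effect == "streamer":
            # Only the unit at the moving "head" position lights up,
            # producing a flowing-water effect as the head advances.
            on = (i == instruction.get("head", 0) % num_units)
            states.append((color, 255) if on else (NULL, 0))
        elif effect == "breathing":
            # All units share a brightness that ramps with phase in [0, 1].
            level = int(255 * instruction.get("phase", 0.0))
            states.append((color, level))
        elif effect == "blink":
            # All units toggle together on alternating ticks.
            on = instruction.get("tick", 0) % 2 == 0
            states.append((color, 255) if on else (NULL, 0))
        else:
            states.append((NULL, 0))
    return states
```

The resulting list is what a driver like the light-emitting driver 106 would consume, one entry per unit.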
In an embodiment of the present application, the light emitting command is sent to the first main control module 100 by an external device. Optionally, after the external device generates the light emitting command, the external device may directly send the light emitting command to the first main control module 100 through the first wireless communication module 101.
In another embodiment of the present application, the external device may send the light emitting command to the second main control module 110 through the second wireless communication module 111, and the light emitting command is forwarded to the first main control module 100 by the second main control module 110.
Specifically, the external device may select one or more of the first wireless communication module 101 and the second wireless communication module 111 according to a requirement, and send the light emitting instruction to the first main control module 100. It should be noted that the present application does not limit the manner in which the external device selects the wireless communication module.
Illustratively, the correspondence between the working mode and the light-emitting instruction is set in advance. The external device can then generate a corresponding light-emitting instruction according to the current working mode of the TWS headset and this correspondence. For example, when the external device detects that the TWS headset is in the talk mode, a first light-emitting instruction is generated, which controls the plurality of light emitting units 105 to present a flowing-water (streamer) light effect. As another example, when the external device detects that the TWS headset is in the game mode, a second light-emitting instruction is generated, which controls the plurality of light emitting units 105 to present a blinking light effect. Note that which light-emitting instruction corresponds to each working mode may be set autonomously by the user.
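A sketch of this preset correspondence, using only the two examples given above (talk mode to a streamer effect, game mode to a blinking effect); the table is user-configurable per the description, so the entries here are illustrative.

```python
# Preset working-mode -> light-effect correspondence (user-adjustable).
MODE_TO_EFFECT = {
    "talk": "streamer",  # first light-emitting instruction: flowing water
    "game": "blink",     # second light-emitting instruction: blinking
}

def make_lighting_instruction(work_mode, table=MODE_TO_EFFECT):
    """Build the instruction the external device would send to the headset."""
    effect = table.get(work_mode)
    if effect is None:
        return None  # no instruction configured for this mode
    return {"effect": effect, "source": "external_device"}
```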
Illustratively, the external device further has an application program installed thereon, and the user may generate the light emitting instruction by clicking a virtual function button on the application program. As shown in fig. 3, the user may generate different lighting instructions by clicking different virtual function buttons. As shown in fig. 3, when the user clicks the virtual function button corresponding to the blinking, the external device generates a light emitting command corresponding to the blinking and transmits the light emitting command to the TWS headset to control the plurality of light emitting units 105 to blink.
In another embodiment of the present application, the lighting instructions may also be generated at the TWS headset side.
Illustratively, the first master control module 100 detects a pressing operation of the TWS headset by a user, and generates a light emitting instruction based on the pressing operation. For example, the first master module 100 may detect a user's pressing operation on the TWS headset through a pressure-sensitive module in the TWS headset. For example, the first master module 100 may generate a corresponding light emitting instruction based on the pressing force, pressing position, and pressing frequency of the user on the TWS headset.
For example, when the user presses the bottom end of the TWS headset 3 times in quick succession with a pressing force of 5 N, a third light-emitting instruction is generated, which instructs the plurality of light emitting units 105 to display a blinking effect.
Table 1 shows a correspondence relationship among a degree of pressing, a pressing position, a frequency of pressing, and a light emission instruction.
[Table 1, giving the correspondence among pressing force, pressing position, pressing frequency, and light-emitting instruction, was provided as images in the original and is not reproduced here.]
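A Table-1-style lookup might look like the following. Only the first row comes from the example in the text (bottom end, 3 presses, 5 N, blinking); the other rows and the exact matching rule are hypothetical placeholders, since the original table is not reproduced.

```python
# (position, press_count, minimum force in newtons) -> light effect.
PRESS_TABLE = [
    ("bottom", 3, 5.0, "blink"),      # example from the description
    ("top",    2, 3.0, "breathing"),  # hypothetical row
    ("side",   1, 2.0, "streamer"),   # hypothetical row
]

def instruction_for_press(position, count, force):
    """Look up the light-emitting instruction for one press gesture."""
    for pos, cnt, min_force, effect in PRESS_TABLE:
        if position == pos and count == cnt and force >= min_force:
            return {"effect": effect}
    return None  # gesture not mapped to any instruction
```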
For example, the first main control module may further detect a touch operation on the TWS headset by the user and generate a light-emitting instruction based on the touch operation. For example, the first main control module 100 may detect the user's touch operation through a touch module in the TWS headset. Similar to the pressing operation, the first main control module 100 may generate a corresponding light-emitting instruction based on the touch strength, touch position, and touch frequency, which will not be described in detail.
For example, the first main control module 100 may further perform audio analysis on the first audio data to obtain audio characteristics, and generate the lighting instruction based on the audio characteristics. For example, the first main control module 100 may send the first audio data to the second main control module 110, and the second audio processing module 114 may start an audio analysis function to obtain the audio feature. Illustratively, the audio characteristic may be an amplitude, a loudness, or the like of the first audio data. Alternatively, the adapted lighting instruction may be generated based on the amplitude of the first audio data at the respective time instant.
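The amplitude-driven variant described above can be sketched as follows. The 0..1 sample scale and the linear amplitude-to-brightness mapping are assumptions for illustration; the patent only says the instruction adapts to the amplitude at each moment.

```python
def instruction_from_amplitude(samples):
    """Map the peak amplitude of one audio frame to an LED brightness level."""
    if not samples:
        return {"effect": "steady", "brightness": 0}
    peak = max(abs(s) for s in samples)     # crude per-frame amplitude feature
    brightness = min(255, int(peak * 255))  # clamp to an 8-bit brightness
    return {"effect": "steady", "brightness": brightness}
```

Run once per audio frame, this yields lights that pulse with the music.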
Since the second main control module 110 with a strong audio processing capability is provided in the present application, in another embodiment of the present application, the light emitting instruction may be generated in a voice manner. Illustratively, the first main control module 100 collects user voices through the microphone array and generates the light emitting instructions based on the user voices. Specifically, the first main control module 100 sends the user voice to the second main control module 110, and the second main control module 110 performs semantic analysis and semantic understanding on the user voice to obtain a voice processing result; then, the voice processing result is transmitted to the first main control module 100, and the first main control module 100 may generate a light emitting instruction based on the voice processing result. Of course, after obtaining the voice processing result, the second main control module 110 may directly generate the light emitting instruction and send the light emitting instruction to the first main control module 100.
For example, when the user wants to control the light-emitting units by voice, the user can press the microphone and speak a corresponding control phrase into it, for example, "make the light-emitting unit blink". After the voice "make the light-emitting unit blink" is collected, a light-emitting instruction for making the light-emitting unit blink can be generated.
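A minimal sketch of this voice-controlled generation, assuming a simple keyword table. The real implementation performs semantic analysis and semantic understanding on the second main control module, which goes well beyond keyword matching; all names below are illustrative:

```python
# Hypothetical keyword table mapping recognized phrases to light effects.
VOICE_COMMANDS = {
    "blink": {"effect": "blink"},
    "breathe": {"effect": "breathe"},
    "off": {"effect": "off"},
}

def voice_to_lighting_instruction(transcript):
    """Return the light-emitting instruction for the first recognized
    keyword in the transcript, or None if no keyword matches."""
    text = transcript.lower()
    for keyword, instruction in VOICE_COMMANDS.items():
        if keyword in text:
            return instruction
    return None
```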
The above five ways of generating the light-emitting instruction are merely examples; in practical applications, the light-emitting instruction may also be generated in other ways.
In one embodiment of the present application, the vibration module 107 includes a speaker, a vibration motor, or a bone conduction vibrator. It should be noted that if the vibration module 107 is a speaker, the vibration module is essentially the speaker 108 of the earphone.
Alternatively, as shown in fig. 4, the user may press the TWS headset to generate a vibration instruction. Specifically, the first main control module 100 may detect the user's pressing operation on the TWS headset through the pressure sensing module, generate a vibration instruction based on the pressing operation, and control the vibration module 107 to vibrate based on the vibration instruction. Similar to the generation of the light-emitting instruction, the vibration instruction may be generated based on the pressing position, pressing force, and pressing frequency, which will not be described in detail here.
Alternatively, as shown in fig. 4, the user may touch the TWS headset to generate a vibration instruction. Specifically, the first main control module 100 may detect the user's touch operation on the TWS headset through the touch module, generate a vibration instruction based on the touch operation, and control the vibration module 107 to vibrate based on the vibration instruction. Similar to the generation of the light-emitting instruction, the vibration instruction may be generated based on the touch position, touch strength, and touch frequency, which will not be described in detail here.
For example, when the vibration module 107 is a speaker, the first main control module 100 may generate a corresponding audio analog signal based on the vibration instruction, input the audio analog signal to the speaker, and then control the speaker to vibrate based on the audio analog signal, so that the TWS headset may interact with the user through vibration, thereby improving the user experience.
For example, when the vibration module 107 is a vibration motor, the first main control module 100 may generate a digital electrical signal required by the vibration motor based on the vibration instruction, input the digital electrical signal to the vibration motor, and then control the vibration motor to vibrate based on the digital electrical signal, so that the TWS headset may interact with the user through vibration, thereby improving the user experience.
For example, when the vibration module 107 is a bone conduction vibrator, the first main control module 100 may generate a sensing signal required by the bone conduction vibrator based on a vibration instruction, input the sensing signal to the bone conduction vibrator, and then control the bone conduction vibrator to vibrate based on the sensing signal, so that the TWS headset may interact with the user through vibration, thereby improving the user experience.
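The three transducer cases above can be sketched as a single dispatch that converts an abstract vibration intensity into the signal form each module type expects. The signal shapes below are illustrative placeholders, not the application's actual drive formats:

```python
def drive_vibration(module_type, intensity):
    """Convert a vibration instruction (intensity in 0.0..1.0) into the
    signal form each transducer type expects; formats are illustrative."""
    if not 0.0 <= intensity <= 1.0:
        raise ValueError("intensity must be in [0, 1]")
    if module_type == "speaker":
        # speaker: an audio analog signal, here reduced to a peak level
        return {"kind": "audio_analog", "level": intensity}
    if module_type == "motor":
        # vibration motor: a digital electrical signal, e.g. 8-bit duty
        return {"kind": "digital", "duty": round(intensity * 255)}
    if module_type == "bone_conduction":
        # bone conduction vibrator: a sensing signal with a gain setting
        return {"kind": "sensing", "gain": intensity}
    raise ValueError(f"unknown vibration module: {module_type}")
```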
In one embodiment of the present application, the external device sends the first audio data to the first main control module 100 through the first wireless communication module 101. Then, when the first main control module 100 determines that the first audio data requires finer audio processing, it forwards the first audio data to the second main control module 110, so that the second main control module 110 can acquire the first audio data sent by the external device.
In another embodiment of the present application, the external device directly transmits the first audio data to the second main control module 110 through the second wireless communication module 111.
Illustratively, when the communication delay is smaller than the delay threshold, the external device sends the first audio data to the second main control module 110 through the second wireless communication module 111; when the communication delay is greater than or equal to the delay threshold, the external device may send the first audio data to the first main control module 100 through the first wireless communication module 101, and the first main control module 100 forwards the first audio data to the second main control module 110.
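A sketch of this delay-based link selection; the 20 ms threshold and all names are hypothetical, since the application gives no concrete figure. The busy and interference fallbacks reflect the alternative selection criteria the application also mentions:

```python
DELAY_THRESHOLD_MS = 20  # hypothetical threshold; the application names none

def select_link(delay_ms, first_link_busy=False, first_link_interfered=False):
    """Pick which wireless communication module carries the first audio
    data: the second module when delay is low (or the first module has no
    free resources / is interfered), otherwise the first module, which
    then forwards the data to the second main control module."""
    if first_link_busy or first_link_interfered:
        return "second"
    if delay_ms < DELAY_THRESHOLD_MS:
        return "second"
    return "first"
```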
It should be noted that the above describes only the manner in which the external device selects, based on communication delay, one wireless communication module from the first wireless communication module 101 and the second wireless communication module 111 to send the first audio data to the TWS headset; in practical applications, the matched wireless communication module may also be selected in other manners. For example, when the first wireless communication module 101 has no free resources, the external device may select the second wireless communication module 111 as the matched wireless communication module; for another example, when the first wireless communication module is currently subject to interference, the external device may select the second wireless communication module as the matched wireless communication module. Alternatively, before receiving the first audio data, the TWS headset sends an indication message to the external device, where the indication message indicates the wireless communication module permitted by the TWS headset, that is, the external device sends the first audio data to the TWS headset through the permitted wireless communication module, where the permitted wireless communication module is one of the first wireless communication module and the second wireless communication module. Therefore, the manner of selecting the matched wireless communication module is not limited in the present application.
In one embodiment of the application, as users pursue higher sound quality, the amount of first audio data sent by the external device to the earphone grows ever larger. Because the TWS headset of the present application is provided with two wireless communication modules, when the first audio data sent by the external device satisfies a splitting requirement, the first audio data can be split for sending. Optionally, when the data amount of the first audio data is large, for example larger than a threshold, the first audio data needs to be split, and the external device may send the first audio data to the TWS headset through the first wireless communication module and the second wireless communication module simultaneously. Alternatively, when the external device needs to send the left and right sound sources of the headset separately, for example when the user plays high-bit-rate music, the external device may send the first audio data to the TWS headset through the first wireless communication module and the second wireless communication module simultaneously, that is, send the left-ear sound source through one wireless communication module and the right-ear sound source through the other wireless communication module.
Optionally, when the external device splits and sends the first audio data, one part of the first audio data may be sent to the first main control module 100 through the first wireless communication module 101, with the first main control module 100 forwarding that part to the second main control module 110, and another part of the first audio data may be sent to the second main control module 110 through the second wireless communication module 111. Further, after obtaining the two parts, the second main control module 110 may integrate them to recover the complete first audio data.
For example, when the external device splits the first audio data into multiple pieces of audio data to send, priority indication information may be carried in each piece of audio data, where the priority indication information is used to indicate a merging order of the multiple pieces of audio data, so that the second main control module 110 may integrate the obtained multiple pieces of audio data according to the indication information carried in each piece of audio data to obtain the complete first audio data.
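The reassembly step can be sketched as sorting the received chunks by their priority indication information and concatenating the payloads. The `(priority, payload)` tuple layout is an assumption made for illustration:

```python
def reassemble(chunks):
    """Merge split audio chunks back into the original byte stream.
    Each chunk is a (priority_index, payload) pair, where the index plays
    the role of the priority indication information carried per chunk."""
    ordered = sorted(chunks, key=lambda chunk: chunk[0])
    return b"".join(payload for _, payload in ordered)
```

Because the chunks arrive over two independent wireless links, they may be received out of order; the priority index restores the merge order regardless of arrival order.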
It can be seen that, in the embodiment of the present application, when the amount of the first audio data is large, the first audio data may be split into multiple parts and sent to the TWS headset through the two wireless communication modules respectively, so that parallel sending of the first audio data is achieved. In this way, real-time sending and playing of high-quality first audio data can be achieved without increasing communication delay, ensuring both high sound quality and low latency.
In an embodiment of the present application, after the first main control module 100 acquires the first audio data, it may send the first audio data to the second main control module 110; after acquiring the first audio data and the second audio data, the second main control module 110 may mix them and play the mixed result through the speaker of the earphone. Alternatively, the second main control module 110 may send the second audio data to the first main control module 100, and the first main control module 100, after acquiring both the first audio data and the second audio data, may mix them and play the mixed result through the speaker of the earphone. The external device sending the first audio data to the headset and the external device sending the second audio data may be the same device or different devices. For example, in the case of the same device, when the external device splits the audio data, one part of the split data (which may be understood as the first audio data) may be sent to the second main control module 110 through a wireless communication module (for example, the first wireless communication module or the second wireless communication module), and another part (which may be understood as the second audio data) may be sent to the second main control module 110 through the audio interface 113. After obtaining the first audio data and the second audio data, the second main control module 110 may mix them to obtain the complete audio data and send it to the speaker of the headset for playing.
For another example, in the case of different devices, one external device may transmit first audio data (e.g., music sound data) to the headset through the wireless communication module (e.g., the first wireless communication module or the second wireless communication module described above), and another external device may transmit second audio data (e.g., game sound data) to the headset through the audio interface. After the second main control module 110 mixes the first audio data and the second audio data, the first audio data and the second audio data may be played through a speaker of the headset.
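A minimal sketch of the mixing step performed before playback, assuming both streams are already decoded to equal-length blocks of floats in [-1, 1]. A real implementation on the second main control module would also resample and time-align the streams; this shows only the additive-mix-and-clamp step:

```python
def mix(first, second):
    """Mix two equal-length audio blocks by summing sample pairs and
    clamping the result to the valid [-1.0, 1.0] range."""
    if len(first) != len(second):
        raise ValueError("blocks must be the same length")
    return [max(-1.0, min(1.0, a + b)) for a, b in zip(first, second)]
```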
The audio interface 113 may be an MMCX interface.
It should be noted that the above only illustrates the case in which one external device sends the first audio data to the headset. Since the TWS headset of the present application is provided with two wireless communication modules, the TWS headset can interact with two external devices simultaneously. Accordingly, one of the two external devices may send audio data to the TWS headset via the first wireless communication module, and the other external device may send audio data to the TWS headset via the second wireless communication module. The earphone can thus perform audio interaction through both wireless communication modules at once, for example playing game audio over one wireless communication module while playing call audio over the other.
In an embodiment of the application, the external device may need to send the first audio data and a control instruction (for example, a light-emitting instruction or a vibration instruction) to the TWS headset at the same time. For example, while the external device is playing music through the TWS headset, the user performs a touch operation on an application program of the external device to generate a light-emitting instruction, which the external device must then send to the TWS headset. Since the communication delay of the second wireless communication module is lower than that of the first wireless communication module, the external device may send the first audio data through the second wireless communication module to the first main control module 100 or the second main control module 110, so that the audio processing module can obtain the audio data sent by the external device. Correspondingly, the external device may send the light-emitting instruction to the first main control module 100 through the first wireless communication module, and the first main control module 100 may then send the light-emitting instruction to the light-emitting driver 106 to control the light-emitting units to display the light effect. In general, when the external device needs to send a control instruction and the first audio data to the TWS headset at the same time, the control instruction is sent through the first wireless communication module and the first audio data through the second wireless communication module. This realizes parallel sending of the control instruction and the first audio data, so that controlling the TWS headset affects neither the transmission of the audio data nor the playing of the audio.
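The routing rule summarized above reduces to a small dispatch on message type; the return strings are illustrative labels, not identifiers from the application:

```python
def route(message_type):
    """When a control instruction and first audio data must be sent in
    the same interval, send control over the first wireless module and
    audio over the lower-latency second wireless module."""
    if message_type == "control":
        return "first_wireless_module"
    if message_type == "audio":
        return "second_wireless_module"
    raise ValueError(f"unknown message type: {message_type}")
```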
Of course, in practical applications, the light emitting command and the first audio data may be simultaneously transmitted to the first main control module 100 or the second main control module 110 through any one of the wireless communication modules.
In this embodiment, the second main control module 110 may process the first audio data according to actual requirements to obtain an audio processing result. Specifically, the second main control module 110 may be a module including a DSP and an audio processor, and its audio processing capability is stronger than that of the first main control module 100. Specifically, the second audio processing module may perform encoding and decoding, asynchronous sample rate conversion, audio front-end storage, and the like on the first audio data; the DSP provides memory management, system interrupts, instruction buffering, data buffering, instruction RAM, data RAM, and so on. With the audio processor and the DSP cooperating, various pre-processing, post-processing, and other operations on the first audio data can be fully performed, meeting the audio processing requirements of various scene modes and various complex functions.
More specifically, when the first audio data is call voice to be played (e.g., phone-call voice or video-call voice), the second main control module 110 may perform noise reduction on the first audio data to obtain clearer communication voice; when the first audio data is music-type audio (e.g., music, video audio, or game audio), the second main control module 110 may encode and decode the first audio data to obtain a better sound effect, improving the user's listening experience. In summary, the second main control module 110 can perform adaptive audio processing on the first audio data according to different application scenarios and contextual models, improving the intelligence of the headset and the user experience.
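This scene-adaptive processing can be sketched as a dispatch on the application scenario. The processing bodies below are placeholders standing in for the real noise-reduction and codec work described above; the scene labels are assumptions:

```python
def process_first_audio(data, scene):
    """Route the first audio data to scene-appropriate processing:
    noise reduction for call voice, codec work for music-type audio,
    and passthrough otherwise."""
    if scene == "call":
        return {"op": "noise_reduction", "data": data}
    if scene in ("music", "video", "game"):
        return {"op": "codec", "data": data}
    return {"op": "passthrough", "data": data}
```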
Further, the second main control module 110 processes the first audio data, and generates an audio processing result. Optionally, the second main control module 110 may send the audio processing result to the first main control module 100, so that the first main control module 100 sends the audio processing result to a speaker in the TWS headset, and the audio processing result is played through the speaker. Optionally, the second main control module 110 may also directly send the audio processing result to a speaker, and play the audio processing result through the speaker.
In an embodiment of the application, the first wireless communication module and the second wireless communication module may support data transmission in a 2.4GHZ band and data transmission in other bands, and are mainly used for implementing private protocol communication between the TWS headset and an external device. Alternatively, with the development of communication technology, the wireless communication module may support more frequency band communication modes, for example, 4g or 5g communication, and even more advanced communication modes in the future.
Wherein, the first antenna module 102 and the second antenna module 112 are both arc-shaped, and the first antenna module 102 and the second antenna module 112 both include: one or more of a Laser Direct Structuring (LDS) antenna, a ceramic antenna, and a board-mounted antenna.
Illustratively, as shown in fig. 2, the second main control module 110 and the first main control module 100 are configured as two independent modules, and the first wireless communication module 101 and the second wireless communication module 111 are accordingly different modules. In the case where the second main control module 110 and the first main control module 100 are provided as two independent modules, in one possible embodiment the first antenna module 102 and the second antenna module 112 are different antenna modules, that is, the first wireless communication module 101 and the second wireless communication module 111 interact with external devices through different antennas. In another possible embodiment, as shown in fig. 5A, the first antenna module 102 and the second antenna module 112 are the same antenna module, that is, the first wireless communication module 101 and the second wireless communication module 111 multiplex the same antenna to interact with an external device; this multiplexed antenna is referred to as the antenna module 120 in this application.
For example, as shown in fig. 5B, the second main control module 110 may be integrated with the first main control module 100, that is, the processing functions of the first audio processing module 104 and the second audio processing module 114 are integrated into one audio processing module, which is referred to as an audio processing module 130 in this application. After the second audio processing module 114 and the first audio processing module 104 are integrated into the audio processing module 130, the audio processing module 130 can obtain audio data from an external device through the first wireless communication module 101 and/or the second wireless communication module 111. For example, when audio data needs to be processed using the processing function of the first audio processing module 104, the audio data may be received from an external device through the first wireless communication module 101; when it is necessary to process audio data using the processing function of the second audio processing module 114, the audio data may be received from an external device through the second wireless communication module 111.
In an embodiment of the present application, with the audio processing modules integrated, the functions handled by the second audio processing module 114 and the first audio processing module 104 can be implemented on one audio processing module, as shown in fig. 5C. In this case only one audio processing module needs to be integrated into the first main control module, referred to as the audio processing module 140. Since only one audio processing module 140 is needed, a single wireless communication module can be connected to it; that is, no separate second wireless communication module is provided. In other words, in this integrated case the first wireless communication module and the second wireless communication module are the same wireless communication module, denoted the first wireless communication module 101 in this application, and similarly no separate second antenna module 112 is required.
In one embodiment of the present application, the external device may be a user device (e.g., a mobile phone, tablet, computer, wearable device, etc.). It may also be another relay device with a data forwarding function, such as the charging case of the earphone. The present application does not limit the type of the external device.
It should be noted that, when the external device is a relay device, the external device needs to maintain communication connections with both the TWS headset and the user equipment, for example, a wireless connection with the TWS headset and a USB connection with the user equipment. The audio data sent by the external device is transmitted to it by the user equipment. Of course, the external device may also receive audio data from the TWS headset and forward it to the user equipment.
The above describes the process of sending audio data from the external device to the headset. The following describes a procedure of the TWS headset transmitting audio data to an external device, in conjunction with the structure of the TWS headset shown in fig. 2. For example, in a call scenario, the TWS headset sends audio data to an external device.
Illustratively, a microphone array collects user speech; and transmits the user voice to the first main control module 100.
Alternatively, the first main control module 100 may directly send the user's voice to the external device through the first wireless communication module 101. Optionally, when the first main control module 100 determines that the user's voice requires finer processing, it may send the user's voice to the second main control module 110 for audio processing (i.e., the second audio processing module 114 performs audio processing on the user's voice) to obtain a user voice processing result. Optionally, after obtaining the user voice processing result, the second main control module 110 sends it to the first main control module 100, and the first main control module 100 then sends it to the external device; alternatively, the second main control module 110 may directly send the user voice processing result to the external device through the second wireless communication module 111.
Referring to fig. 6, fig. 6 is a schematic flowchart of an audio playing, lighting and vibrating method according to an embodiment of the present disclosure. The method is applied to the TWS headset described above. The method comprises the following steps:
601: the second main control module processes first audio data to obtain a first audio processing result, wherein the first audio data is received by the first main control module from an external device through the Bluetooth module or the first wireless communication module and is sent to the second main control module, and/or is obtained from the external device through the second wireless communication module.
602: and the second master control module processes second audio data to obtain a second audio processing result, wherein the second audio data is received by the second master control module through the audio interface.
603: the first main control module obtains a light-emitting instruction and controls the light-emitting units to emit light based on the light-emitting instruction.
604: the first main control module acquires a vibration instruction and controls the vibration module to vibrate based on the vibration instruction.
It should be noted that, for specific implementation functions of the above modules, reference may be made to specific functions of the above TWS headset embodiment, and a repeated description is not made herein.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 7, the electronic device 700 includes a first transceiver 701, a second transceiver 702, a first processor 703, a second processor 704, a memory 705, a speaker 706, an audio interface 707, a light-emitting driver 708, a plurality of light-emitting units 709, a vibration module 710, and a bluetooth module 711, which are connected to each other by a bus 712. The memory 705 is used to store computer programs and data, and the data stored in the memory 705 may be transmitted to the first processor 703 and the second processor 704.
The first processor 703 and the second processor 704 are configured to read the computer program in the memory 705, and perform the following operations:
the second processor 704 processes the first audio data to obtain a first audio processing result; wherein the first audio data is received by the first processor 703 from an external device through the bluetooth module 711 or the first transceiver 701 and is sent to the second processor 704, and/or is obtained from the external device through the second transceiver 702;
the second processor 704 processes second audio data to obtain a second audio processing result, where the second audio data is received by the second processor 704 through the audio interface;
the first processor 703 acquires a light emitting instruction, and controls the plurality of light emitting units 709 to emit light based on the light emitting instruction;
the first processor 703 acquires a vibration instruction, and controls the vibration module 710 to vibrate based on the vibration instruction.
In some possible embodiments, in terms of the first processor 703 acquiring the light emitting instruction, the first processor 703 is specifically configured to perform the following steps:
acquiring the light-emitting instruction from the external device, wherein the light-emitting instruction is generated by the external device based on a touch operation of a user on an application program on the external device;
or acquiring the pressing operation of the user on the earphone, and generating the light-emitting instruction based on the pressing operation;
or acquiring touch operation of a user on the earphone, and generating the light-emitting instruction based on the touch operation;
or carrying out audio analysis on the audio data to obtain audio characteristics, and generating the light-emitting instruction based on the audio characteristics;
or collecting user voice through a microphone array of the earphone, and generating the light-emitting instruction based on the user voice.
In some possible embodiments, in terms that the first processor 703 controls the plurality of light-emitting units to emit light based on the light-emitting instruction, the first processor 703 is specifically configured to perform the following steps:
determining a display color and a display brightness of each of the plurality of light-emitting units based on the light-emitting instruction;
and controlling each light-emitting unit to emit light according to the respective display color and the display brightness.
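The two steps above can be sketched as expanding one light-emitting instruction into per-unit settings; the instruction fields and default values are assumptions for illustration:

```python
def apply_lighting_instruction(instruction, unit_count):
    """Determine the display color and brightness of each light-emitting
    unit from one light-emitting instruction, producing one setting per
    unit for the light-emitting driver to apply."""
    color = instruction.get("color", "#FFFFFF")      # hypothetical default
    brightness = instruction.get("brightness", 255)  # hypothetical default
    return [{"unit": i, "color": color, "brightness": brightness}
            for i in range(unit_count)]
```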
In some possible embodiments, in terms of the first processor 703 acquiring the vibration instruction, the first processor 703 is specifically configured to perform the following steps:
obtaining the pressing operation of a user on the earphone, and generating the vibration instruction based on the pressing operation;
or,
and acquiring touch operation of a user on the earphone, and generating the vibration instruction based on the touch operation.
In some possible embodiments, the vibration module comprises the speaker, a vibration motor, or a bone conduction vibrator.
In some possible embodiments, when the audio data meets the splitting requirement, the first master control module is configured to receive a portion of the audio data from the external device through the first wireless communication module;
the second master control module is used for receiving another part of the audio data from the external equipment through the second wireless communication module;
the second main control module is further configured to receive the portion of the audio data from the first main control module, and to integrate the two portions into the complete audio data.
In some possible embodiments, the second master control module and the first master control module are provided as two independent modules, and the first wireless communication module and the second wireless communication module are different modules;
or, the second main control module and the first main control module are integrated into a whole, the first wireless communication module and the second wireless communication module are the same module, and the first wireless communication module and the second wireless communication module share the same antenna or use different antennas.
Embodiments of the present application also provide a computer-readable storage medium storing a computer program, which is executed by a processor to implement part or all of the steps of any one of the audio processing, lighting and vibrating methods as set forth in the above method embodiments.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the audio processing, lighting and vibration methods as set forth in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed across a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software program module.
If implemented in the form of a software program module and sold or used as a stand-alone product, the integrated unit may be stored in a computer-readable memory. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, or a magnetic or optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing associated hardware. The program may be stored in a computer-readable memory, which may include a flash memory disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic or optical disk, and the like.
The embodiments of the present application have been described in detail above; specific examples are used herein to illustrate the principles and implementations of the present application, and the above description of the embodiments is provided only to help understand the method and core concept of the present application. Meanwhile, a person skilled in the art may, following the idea of the present application, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A true wireless stereo headset, the headset comprising a first main control module, the first main control module comprising a Bluetooth module, the headset further comprising:
the device comprises a first wireless communication module, a second main control module, an audio interface, a plurality of light-emitting units and a vibration module;
the second main control module is configured to process first audio data to obtain a first audio processing result, where the first audio data is received by the first main control module from an external device through the Bluetooth module or the first wireless communication module and is sent to the second main control module, and/or is obtained from the external device through the second wireless communication module;
the second main control module is further configured to process second audio data to obtain a second audio processing result, where the second audio data is received by the second main control module through the audio interface;
the first main control module is used for acquiring a light-emitting instruction and controlling the plurality of light-emitting units to emit light based on the light-emitting instruction;
the first main control module is further used for acquiring a vibration instruction and controlling the vibration module to vibrate based on the vibration instruction.
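The two-controller split in claim 1 can be sketched in code. The sketch below is purely illustrative: all class and method names are assumptions, not terms from the patent; it only shows the two paths by which audio reaches the second main control module (forwarded over Bluetooth/the first wireless link by the first module, or received directly over the audio interface).

```python
# Hypothetical sketch of the claim-1 audio routing; names are illustrative.

class SecondMainControl:
    """Processes both wireless-forwarded and wired (audio-interface) audio."""
    def __init__(self):
        self.results = []

    def process_first_audio(self, data: bytes):
        # First audio data: forwarded by the first main control module.
        self.results.append(("first", len(data)))

    def on_audio_interface(self, data: bytes):
        # Second audio data: received directly through the audio interface.
        self.results.append(("second", len(data)))

class FirstMainControl:
    """Receives audio over Bluetooth / the first wireless link, forwards it."""
    def __init__(self, second_ctrl: SecondMainControl):
        self.second_ctrl = second_ctrl

    def on_bluetooth_audio(self, data: bytes):
        self.second_ctrl.process_first_audio(data)
```

In this reading, the first module is purely a transport and UI controller (lighting, vibration), while all audio processing is concentrated in the second module.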
2. The headset of claim 1,
in the aspect of the first main control module acquiring the light-emitting instruction, the first main control module is specifically configured to:
acquire the light-emitting instruction from the external device, wherein the light-emitting instruction is generated by the external device based on a touch operation of a user on an application program on the external device;
or acquiring the pressing operation of the user on the earphone, and generating the light-emitting instruction based on the pressing operation;
or acquiring touch operation of a user on the earphone, and generating the light-emitting instruction based on the touch operation;
or carrying out audio analysis on the audio data to obtain audio characteristics, and generating the light-emitting instruction based on the audio characteristics;
or collecting user voice through a microphone array of the earphone, and generating the light-emitting instruction based on the user voice.
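Claim 2 lists five alternative trigger sources for the light-emitting instruction. A minimal dispatch over those sources might look like the sketch below; the instruction format, field names, and the audio-feature-to-brightness mapping are all assumptions for illustration.

```python
# Illustrative mapping from the claim-2 trigger sources to a light-emitting
# instruction. The dict layout and thresholds are assumed, not from the patent.

def make_light_instruction(source: str, payload) -> dict:
    """Build a light-emitting instruction from one of the claim-2 triggers."""
    if source == "app":                # generated on the external device's app
        return {"trigger": "app", "pattern": payload}
    if source in ("press", "touch"):   # generated on the headset itself
        return {"trigger": source, "pattern": "toggle"}
    if source == "audio":              # derived from an audio feature in [0, 1]
        brightness = min(255, int(payload * 255))
        return {"trigger": "audio", "brightness": brightness}
    if source == "voice":              # derived from a recognized voice command
        return {"trigger": "voice", "pattern": payload}
    raise ValueError(f"unknown trigger source: {source}")
```

The audio branch, for example, could be fed a normalized beat-energy value so that the lights pulse with the music; the patent does not specify which audio features are used.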
3. The headset according to claim 1 or 2,
in an aspect of controlling the plurality of light emitting units to emit light based on the light emitting instruction, the first main control module is specifically configured to:
determining a display color and a display brightness of each of the plurality of light-emitting units based on the light-emitting instruction;
and controlling each light-emitting unit to emit light according to the respective display color and the display brightness.
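Claim 3 says the instruction is decoded into a display color and brightness for each light-emitting unit individually. One hedged way to model that per-unit assignment, assuming a dictionary-shaped instruction with an optional shared default:

```python
# A minimal sketch of claim 3: decode a light-emitting instruction into a
# per-unit (color, brightness) assignment. The instruction format is assumed.

def apply_light_instruction(instruction: dict, num_units: int) -> list:
    """Return the (color, brightness) each light-emitting unit should show."""
    states = []
    for i in range(num_units):
        # A unit may have its own entry; otherwise fall back to a shared
        # default, and finally to "off" at zero brightness.
        entry = instruction.get(i, instruction.get("default", ("off", 0)))
        states.append(entry)
    return states
```

Because each unit is addressed separately, effects such as chases or gradients across the plurality of units reduce to emitting a different per-index entry for each frame.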
4. The headset according to any one of claims 1-3,
in the aspect that the first main control module acquires the vibration instruction, the first main control module is specifically configured to:
acquiring the pressing operation of a user on the earphone, and generating the vibration instruction based on the pressing operation;
or,
and acquiring touch operation of a user on the earphone, and generating the vibration instruction based on the touch operation.
5. The headset according to any one of claims 1-4,
the vibration module comprises a loudspeaker, a vibration motor, or a bone conduction vibrator.
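Claim 5 allows three different actuators behind the same vibration instruction. A sketch of that dispatch is below; the drive parameters (tone frequency, PWM duty) are invented for illustration and are not taken from the patent.

```python
# Hedged sketch of claim 5: route one vibration instruction to whichever
# actuator the vibration module is built around. All values are assumptions.

def drive_vibration(actuator: str, intensity: float) -> str:
    """Map a vibration instruction to one of the claim-5 actuators."""
    intensity = max(0.0, min(1.0, intensity))   # clamp to a safe range
    if actuator == "speaker":
        # A loudspeaker can double as a vibrator via a low-frequency tone.
        return f"speaker: play {int(40 + 20 * intensity)} Hz tone"
    if actuator == "motor":
        return f"motor: PWM duty {int(intensity * 100)}%"
    if actuator == "bone_conduction":
        return f"bone_conduction: drive at {intensity:.2f} amplitude"
    raise ValueError(f"unsupported actuator: {actuator}")
```

Reusing the loudspeaker as the vibrator saves a dedicated motor at the cost of mixing haptic tones into the audio path, which is presumably why the claim lists the alternatives.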
6. The headset according to any one of claims 1-5,
when the first audio data meets a splitting requirement, the first main control module is configured to receive one part of the first audio data from the external device through the first wireless communication module;
the second main control module is configured to receive the other part of the first audio data from the external device through the second wireless communication module;
the second main control module is further configured to receive the one part of the first audio data from the first main control module, and to integrate the one part and the other part into the first audio data.
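The split transfer in claim 6 can be sketched as a split-and-reassemble pair: the external device sends one part of the stream to each main control module over its own wireless link, and the second module merges them back. Tagging chunks with byte offsets so they can be reordered is an assumption; the patent does not specify the splitting scheme.

```python
# Sketch of the claim-6 split transfer. The interleaved single-byte chunking
# and offset tagging are illustrative assumptions, not the patent's scheme.

def split_audio(data: bytes) -> tuple:
    """External-device side: split audio into two offset-tagged parts."""
    part_a = [(i, data[i:i + 1]) for i in range(0, len(data), 2)]  # link 1
    part_b = [(i, data[i:i + 1]) for i in range(1, len(data), 2)]  # link 2
    return part_a, part_b

def integrate(part_a: list, part_b: list) -> bytes:
    """Second-main-control-module side: merge both parts back in order."""
    chunks = sorted(part_a + part_b, key=lambda t: t[0])
    return b"".join(c for _, c in chunks)
```

Splitting one stream across two links can roughly double the usable throughput (e.g. for high-resolution audio), at the cost of the reassembly buffering shown in `integrate`.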
7. The headset according to any one of claims 1-6,
the second main control module and the first main control module are arranged into two independent modules, and the first wireless communication module and the second wireless communication module are different modules;
or, the second main control module and the first main control module are integrated into a whole, the first wireless communication module and the second wireless communication module are the same module, and the first wireless communication module and the second wireless communication module share the same antenna or use different antennas.
8. An audio processing, lighting and vibrating method, characterized in that the method is applied to a true wireless stereo headset; the headset comprises a first main control module, the first main control module comprises a Bluetooth module, and the headset further comprises: a first wireless communication module, a second main control module, an audio interface, a plurality of light-emitting units and a vibration module;
the second main control module processes first audio data to obtain a first audio processing result, wherein the first audio data is received by the first main control module from an external device through the Bluetooth module or the first wireless communication module and is sent to the second main control module, and/or is obtained from the external device through the second wireless communication module;
the second main control module processes second audio data to obtain a second audio processing result, wherein the second audio data is received by the second main control module through the audio interface;
the first main control module acquires a light-emitting instruction and controls the light-emitting units to emit light based on the light-emitting instruction;
the first main control module acquires a vibration instruction and controls the vibration module to vibrate based on the vibration instruction.
9. An electronic device, comprising: a processor and a memory, the processor being coupled to the memory; the memory is configured to store a computer program, and the processor is configured to execute the computer program stored in the memory to cause the electronic device to perform the method of claim 8.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method of claim 8.
CN202210787328.7A 2022-07-05 2022-07-05 True wireless stereo earphone, audio processing, lighting and vibration method Active CN115038021B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210787328.7A CN115038021B (en) 2022-07-05 2022-07-05 True wireless stereo earphone, audio processing, lighting and vibration method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210787328.7A CN115038021B (en) 2022-07-05 2022-07-05 True wireless stereo earphone, audio processing, lighting and vibration method

Publications (2)

Publication Number Publication Date
CN115038021A true CN115038021A (en) 2022-09-09
CN115038021B CN115038021B (en) 2023-04-25

Family

ID=83129235

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210787328.7A Active CN115038021B (en) 2022-07-05 2022-07-05 Real wireless stereo earphone, audio processing, lighting and vibration method

Country Status (1)

Country Link
CN (1) CN115038021B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024077701A1 (en) * 2022-10-12 2024-04-18 两氢一氧(杭州)数字科技有限公司 Light-emitting control method and apparatus for wireless earphone, terminal, and wireless earphone

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050181826A1 (en) * 2004-02-18 2005-08-18 Partner Tech. Corporation Handheld personal digital assistant for communicating with a mobile in music-playing operation
EP1569425A1 (en) * 2004-02-24 2005-08-31 Partner Tech. Corporation Handheld PDA wirelessly connected to mobile phone and capable of playing MP3 music. Music is interrupted if incoming call is received.
CN108124214A (en) * 2013-11-12 2018-06-05 南安市鑫灿品牌运营有限公司 A kind of bluetooth headset and combinations thereof
US20190306623A1 (en) * 2018-03-30 2019-10-03 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method for Input Operation Control and Related Products
CN111681404A (en) * 2020-08-11 2020-09-18 江西斐耳科技有限公司 Communication equipment, earphone and communication equipment main part based on BLE wireless data transmission
CN113411726A (en) * 2020-03-17 2021-09-17 华为技术有限公司 Audio processing method, device and system
CN214413006U (en) * 2021-03-19 2021-10-15 北京中科声空科技有限公司 True wireless stereo earphone



Also Published As

Publication number Publication date
CN115038021B (en) 2023-04-25

Similar Documents

Publication Publication Date Title
CN109445740B (en) Audio playing method and device, electronic equipment and storage medium
CN106531177B (en) Audio processing method, mobile terminal and system
US9733890B2 (en) Streaming audio, DSP, and light controller system
CN111132111B (en) BLE-based audio sharing method, system and computer readable storage medium
CN109379490B (en) Audio playing method and device, electronic equipment and computer readable medium
JP2022081381A (en) Method and device for playing back audio data, electronic equipment and storage medium
US20170195817A1 (en) Simultaneous Binaural Presentation of Multiple Audio Streams
CN115038021B (en) True wireless stereo earphone, audio processing, lighting and vibration method
US20150086024A1 (en) Apparatus and method for reproducing multi-sound channel contents using dlna in mobile terminal
CN111970613A (en) Wireless transmitting device, control method and display device
US20070060195A1 (en) Communication apparatus for playing sound signals
CN111770403A (en) Wireless earphone control method, wireless earphone and control system thereof
CN104035350A (en) Headphone jack based control system and method and headphone jack device
JP2018530286A (en) DJ device with integrated detachable fader component
CN112887858A (en) Microphone with hardware sound effect and sound effect processing method
KR200397845Y1 (en) Wireless audio signal transferring apparatus
WO2017029896A1 (en) Audio device and audio device control method
CN114501401A (en) Audio transmission method and device, electronic equipment and readable storage medium
CN111556406B (en) Audio processing method, audio processing device and earphone
CN113709528B (en) Play control method, play configuration device, electronic equipment and storage medium
CN208707868U (en) A kind of microphone and microphone system
CN111787446A (en) Electronic equipment, data processing method and device
CN113709906A (en) Wireless audio system, wireless communication method and device
CN213880236U (en) Sound effect conversion device
US20220244908A1 (en) Content playback program, content playback device, content playback method, and content playback system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant