CN111142665B - Stereo processing method and system for earphone assembly and earphone assembly - Google Patents

Stereo processing method and system for earphone assembly and earphone assembly

Info

Publication number
CN111142665B
CN111142665B (application CN201911377379.7A)
Authority
CN
China
Prior art keywords
earphone
sound source
earpiece
filter
filter coefficients
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911377379.7A
Other languages
Chinese (zh)
Other versions
CN111142665A (en)
Inventor
童伟峰
张亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bestechnic Shanghai Co Ltd
Original Assignee
Bestechnic Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bestechnic Shanghai Co Ltd filed Critical Bestechnic Shanghai Co Ltd
Priority to CN201911377379.7A
Publication of CN111142665A
Application granted
Publication of CN111142665B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00Stereophonic arrangements
    • H04R5/033Headphones for stereophonic communication
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R29/00Monitoring arrangements; Testing arrangements

Abstract

The disclosure relates to a stereo processing method and system for an earphone assembly, and to the earphone assembly. The earphone assembly includes a first earphone and a second earphone, and the stereo processing method includes: detecting a motion parameter of the first earphone; transmitting the detected motion parameter of the first earphone to the second earphone; and adjusting the audio signals to be played by the first earphone and the second earphone, respectively, based on the motion parameter, so that the adjusted audio signals simulate the sound changes caused by the motion of the corresponding earphone relative to the sound source position. In this way, each earphone in the earphone assembly can promptly adjust the stereo it generates to simulate the stereo change caused by a change in the relative position of the earphone and the sound source.

Description

Stereo processing method and system for earphone assembly and earphone assembly
Technical Field
The present disclosure relates to headphones and sound effect processing methods thereof, and more particularly, to a stereo processing method and system of a headphone assembly, and a headphone assembly.
Background
With social progress and rising living standards, earphones have become an everyday necessity. A conventional wired earphone is connected to a smart device (such as a smartphone, notebook computer, or tablet computer) through a cable, which restricts the wearer's movements, especially during sports. The tangling and pulling of the earphone cord, as well as the stethoscope effect, also degrade the user experience. Ordinary Bluetooth headsets remove the cable between the headset and the smart device, but a cable between the left and right earpieces remains. True wireless stereo earphones have therefore emerged.
However, in a game scenario, such as a virtual reality game, when the wearer moves while wearing the earphones, for example in various directions, the position of the earphones relative to the sound source in the game scene changes. Current true wireless stereo earphones keep the previously set stereo effect even as the earphones move. For example, if the stereo effect of a gunshot is set to come from the front and the wearer then turns 180 degrees, the gunshot source is now behind the wearer, yet the stereo effect heard is still that of a sound coming from the front; the wearer may feel that the gunshot source has suddenly jumped in position, which seriously undermines the realism of the experience.
Disclosure of Invention
The present disclosure is provided to solve the above-mentioned problems occurring in the prior art.
What is needed is a stereo processing method and system for an earphone assembly, and an earphone assembly, that allow each earphone to promptly adjust the stereo it generates so as to simulate the stereo change caused by a change in the relative position of the earphone and the sound source. The position and localization of the sound source then remain realistic to the wearer while the wearer moves with the earphones, improving the experience in scenarios such as games.
According to a first aspect of the present disclosure, there is provided a stereo processing method of an earphone assembly including a first earphone and a second earphone; the stereo processing method comprises the following steps: detecting a motion parameter of the first earphone; transmitting the detected motion parameters of the first earphone to the second earphone; and adjusting the audio signals to be played by the first earphone and the second earphone respectively based on the motion parameters, so that the adjusted audio signals simulate sound changes caused by the motion of the corresponding earphone relative to the sound source position.
According to a second aspect of the present disclosure, there is provided a stereo processing system of an earphone assembly including a first earphone and a second earphone; the stereo processing system includes: the detection module is configured to detect a motion parameter of the first earphone; a transmission module configured to transmit the detected motion parameter of the first earphone to the second earphone; and an adjustment module configured to adjust audio signals to be played by the first earphone and the second earphone, respectively, based on the motion parameters, so that the adjusted audio signals simulate sound variations caused by the motion of the corresponding earphone relative to the sound source position.
According to a third aspect of the present disclosure, there is provided an earphone assembly comprising at least a first earphone and a second earphone, wherein the first earphone detects its motion parameter using a detection device arranged on the first earphone and transmits the detected motion parameter to the second earphone; the first earphone and the second earphone each adjust the audio signal to be played based on the motion parameter, so that the adjusted audio signals simulate the sound changes caused by the motion of the corresponding earphone relative to the sound source position.
With the stereo processing method and system of the earphone assembly, and the earphone assembly, according to embodiments of the present disclosure, each earphone in the assembly promptly adjusts the stereo it generates to simulate the stereo change caused by the change in the relative position of the earphone and the sound source, so that the position and localization of the sound source remain realistic to the wearer while moving with the earphones, improving the experience in scenarios such as games.
Drawings
In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. The same reference numerals with letter suffixes or different letter suffixes may represent different instances of similar components. The accompanying drawings illustrate various embodiments by way of example in general and not by way of limitation, and together with the description and claims serve to explain the disclosed embodiments. Such embodiments are illustrative and not intended to be exhaustive or exclusive of the present apparatus or method.
Fig. 1 shows a schematic overview of a communication connection between individual headphones of a headphone assembly and between the headphones and another device according to an embodiment of the present disclosure;
fig. 2 illustrates a flow chart of a stereo processing method of a headset assembly according to an embodiment of the present disclosure;
fig. 2 (a) shows a flowchart of a stereo processing based on motion parameters of a headset assembly according to an embodiment of the present disclosure;
fig. 3 (a) shows a processing schematic of a mono audio signal according to an embodiment of the present disclosure;
fig. 3 (b) shows a processing schematic of a binaural audio signal according to an embodiment of the disclosure;
fig. 4 illustrates a timing diagram of a stereo processing method of a headset assembly according to an embodiment of the present disclosure;
fig. 5 (a) shows a schematic diagram of the structure of a bluetooth physical frame according to an embodiment of the present disclosure;
fig. 5 (b) shows a schematic diagram of a structure of a bluetooth physical frame according to another embodiment of the present disclosure;
fig. 6 shows a schematic diagram of a stereo processing system of a headset assembly according to an embodiment of the present disclosure.
Detailed Description
In order to better understand the technical solutions of the present disclosure, a detailed description is provided below with reference to the accompanying drawings and specific embodiments. Embodiments of the present disclosure are described in further detail with reference to the drawings and specific embodiments, but they do not limit the present disclosure. Unless steps necessarily depend on one another, the order in which they are described here is merely exemplary and should not be construed as limiting; those skilled in the art will understand that the order of the steps may be changed without breaking their mutual logic or preventing the overall process from being realized.
Fig. 1 shows a schematic diagram of the communication connections between the individual earpieces of an earphone assembly and between the earpieces and another device according to an embodiment of the present disclosure. As shown in fig. 1, the communication system 100 established by the earphone assembly with another device includes the other device 101, a first earpiece 102, and a second earpiece 103. The other device 101 may be any of a variety of portable smart terminals, including but not limited to a cell phone, a tablet computer, a wearable smart device, etc. The first earpiece 102 establishes a first communication connection 104 with the other device 101, and the first earpiece 102 also establishes a second communication connection 105 with the second earpiece 103. The first earpiece 102 can transmit the relevant communication parameters to the second earpiece 103, so that the second earpiece 103 uses those parameters to listen in on the first communication connection, i.e. over the listening connection 106; the relevant communication parameters may be transmitted directly to the second earpiece 103 or via a relay device, which may be any one or a combination of a charging case, the other device 101, a wired circuit, etc. In some embodiments, the relevant communication parameters include, but are not limited to, the communication connection address of the other device 101, encryption parameter information of the communication connection, etc., so that the second earpiece 103 does not need to perform pairing and establish its own communication connection, but can instead masquerade as the first earpiece 102 to listen for and receive the signals sent by the other device 101 via the first communication connection 104. Such communication connections include, but are not limited to, Bluetooth, WIFI, radio frequency, wired transmission, and the like. Because the second earpiece 103 listens in on the first communication connection 104, the first communication connection 104 does not need to be established a second time, and the first earpiece 102 does not need to forward all of the audio data it receives from the other device 101 to the second earpiece 103; information transfer between the other device 101 and the two earpieces 102 and 103 is therefore more efficient, and the time difference between the information received by the first earpiece 102 and the second earpiece 103 is reduced, improving their synchronization.
As shown in fig. 1, the motion parameters of the first earpiece 102 may be transmitted to the second earpiece 103 via the second communication connection 105, so that the audio signals to be played by the first earpiece 102 and the second earpiece 103, respectively, can be adjusted based on the motion parameters, and the adjusted audio signals simulate the change in the stereo effect of the sound caused by the movement of the respective earpiece relative to the sound source position. By adjusting the stereo effect of the audio signal in near real time in response to the relative movement of the earpieces, the wearer's sense of the position and localization of the sound source becomes more realistic; for example, for a fixed sound source, however the earpieces move (panning, rotating, accelerating, etc.), the wearer still perceives a fixed sound source. In some embodiments, the second communication connection 105 may also be used to transfer synchronized playback information between the first earpiece 102 and the second earpiece 103, enabling each earpiece in the earphone assembly to play the audio signals synchronously, which improves the synchronization of stereo playback between the earpieces and further improves the wearer's stereo listening experience.
Fig. 2 shows a flow chart of a stereo processing method 200 of an earphone assembly according to an embodiment of the present disclosure. As shown in fig. 2, in step 201 a motion parameter of the first earpiece 102 is detected. The first earpiece 102 has mounted on it a sensor capable of detecting its motion parameters, including but not limited to an acceleration sensor, a position sensor, an inertial sensor, a gyroscope, etc. The motion parameters of the first earpiece detected by the sensor include, but are not limited to, angular velocity, acceleration, displacement, position and orientation, and may be any one or a combination of several of them. In step 202, the detected motion parameters of the first earpiece 102 are transmitted to the second earpiece 103. In one embodiment, the motion parameters may be transmitted between the earpieces via the second communication connection 105. In step 203, the audio signals to be played by the first and second earphones are respectively adjusted based on the motion parameters, so that the adjusted audio signals simulate the sound changes caused by the motion of the corresponding earphone relative to the sound source position. When the position of the wearer changes relative to the sound source, i.e. the position of the earphone changes relative to the sound source, the sensor on the first earphone detects a motion parameter indicating that change of position. Each earphone in the earphone assembly then applies a stereo-processing adjustment to the audio signal it is to play, based on the motion parameters, so as to simulate the audio signal the wearer would actually hear after the movement relative to the sound source position. In this way the method promptly adjusts the stereo generated by the earphones based on the motion parameters, to simulate the stereo after the relative position of the earphones and the sound source has changed.
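As an illustration only, the three-step flow of steps 201-203 can be sketched as follows; the sensor, transport, and adjustment helpers named here are assumptions for illustration, not the implementation of the disclosure.

```python
from dataclasses import dataclass
from typing import Sequence

@dataclass
class MotionParams:
    angular_velocity: Sequence[float]  # e.g. from a gyroscope, in rad/s
    acceleration: Sequence[float]      # e.g. from an accelerometer, in m/s^2
    orientation: Sequence[float]       # e.g. yaw/pitch/roll, in rad

def stereo_processing_cycle(first_earpiece, second_earpiece, audio_frame):
    # Step 201: the first earpiece detects its motion parameters.
    params: MotionParams = first_earpiece.read_motion_sensors()   # hypothetical API
    # Step 202: the parameters are sent to the second earpiece,
    # e.g. over the second communication connection 105.
    second_earpiece.receive_motion_params(params)                 # hypothetical API
    # Step 203: each earpiece adjusts its own audio signal so the result
    # simulates the sound change caused by its motion relative to the source.
    out_first = first_earpiece.adjust_audio(audio_frame, params)  # hypothetical API
    out_second = second_earpiece.adjust_audio(audio_frame, params)
    return out_first, out_second
```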
In some embodiments, the adjusting of the audio signals to be played by the first earphone and the second earphone based on the motion parameters in step 204 specifically includes the following. As shown in fig. 2 (a), which is a flowchart of stereo processing based on the motion parameters of an earphone assembly according to an embodiment of the present disclosure, in step 2041 the position and orientation of each of the first earphone and the second earphone relative to the sound source are determined based on the motion parameters. Each earphone in the earphone assembly determines its own position and orientation relative to the sound source by analysis and calculation from the detected or received motion parameters. In step 2042, the filter coefficients corresponding to the closest position and orientation relative to the sound source are selected from a predetermined filter list, based on the determined position and orientation of the respective earphone. The predetermined filter list stores positions and orientations relative to the sound source together with the corresponding filter coefficients, which can be determined in advance by measurement. In step 2043, the audio signal to be played by the respective earphone is filtered using the selected filter coefficients. The filtering makes the processed signal simulate the stereo change caused by the movement of the earphone relative to the sound source position.
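A minimal sketch of steps 2042 and 2043 is given below, assuming the filter list is a sequence of (position, orientation, FIR coefficients) entries and using a simple Euclidean distance to find the closest entry; both the data layout and the distance metric are assumptions for illustration.

```python
import numpy as np
from scipy.signal import lfilter

def select_filter_coefficients(filter_list, position, orientation):
    """filter_list: iterable of (position, orientation, fir_coefficients) entries."""
    def pose_distance(entry):
        pos, ori, _ = entry
        return (np.linalg.norm(np.subtract(pos, position)) +
                np.linalg.norm(np.subtract(ori, orientation)))
    # Step 2042: pick the entry whose stored pose is closest to the current pose.
    _, _, coeffs = min(filter_list, key=pose_distance)
    return coeffs

def filter_audio(audio_block, coeffs):
    # Step 2043: FIR-filter the audio block with the selected coefficients.
    return lfilter(coeffs, [1.0], audio_block)
```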
In some embodiments, the filter list of step 2042, which stores positions and orientations relative to the sound source together with the corresponding filter coefficients, may be predetermined by the following measurement procedure. In step 2042a, head related transfer functions are measured in advance for different positions and orientations relative to the sound source. The head related transfer function (HRTF, Head Related Transfer Function) underlies an audio localization technique that can transform the sound effects in a game so that the user perceives the sound source as coming from different positions and directions. In step 2042b, the corresponding filter coefficients are determined based on the head related transfer functions measured in advance for the different positions and orientations. Specifically, microphones can be mounted at the eardrum positions of a head model, and sound is then emitted from a plurality of fixed sound source positions. The sound collected by the microphones is analyzed and computed to obtain the specific sound data as changed by the head, and a filter is designed to simulate that changed sound data; the corresponding filter coefficients are determined by taking the pre-measured head related transfer function as the transfer function of the filter. In step 2042c, the filter coefficients are stored in association with the position and orientation, thereby constructing the filter list. Storing the positions and orientations of the sound source together with the corresponding filter coefficients forms a filter list that can be looked up, making it convenient to determine the filter coefficients quickly.
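The offline construction of steps 2042a-2042c might look like the sketch below, which assumes each measurement yields a head related impulse response that can be truncated and used directly as FIR coefficients; the measurement data layout is an assumption for illustration.

```python
import numpy as np

def build_filter_list(measurements, fir_length=256):
    """measurements: iterable of (position, orientation, impulse_response) tuples
    obtained with microphones at the eardrum positions of a head model."""
    filter_list = []
    for position, orientation, impulse_response in measurements:
        # Truncate the measured head related impulse response and use it as the
        # coefficients of an FIR filter whose transfer function approximates
        # the measured head related transfer function.
        coeffs = np.asarray(impulse_response, dtype=float)[:fir_length]
        filter_list.append((tuple(position), tuple(orientation), coeffs))
    return filter_list
```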
In some embodiments, the above-mentioned filter is a digital filter, and the filter coefficients of the digital filter may be adaptively configured according to the motion parameters (one or more of angular velocity, acceleration, position and orientation) of the earphone.
In some embodiments, echoes may also be used to make the sound effect more spatial; the data of the specific sounds and echoes as changed by the head may likewise be obtained, and filters designed to simulate those changed sounds and echoes.
In some embodiments, when the audio signal fed to each headphone (the first headphone 102 and the second headphone 103) in the headphone assembly is mono, the position and orientation of the left-ear earphone and of the right-ear earphone with respect to the sound source are acquired respectively; a head-related transform function is determined based on the position and orientation of the sound source; and the filter coefficients corresponding to the head-related transform function are extracted from the filter list as the left channel filter coefficients of the left-ear earphone and the right channel filter coefficients of the right-ear earphone, respectively.
Fig. 3 (a) shows an output schematic of a mono audio signal according to an embodiment of the present disclosure. As shown in fig. 3 (a), when a mono audio signal is fed to headphones, the left-ear headphones and the right-ear headphones acquire their own position and orientation, respectively, with respect to the sound source, and determine their head-related transform functions, and determine the corresponding filter coefficients by looking up a table. Thus, the left ear earphone filter and the right ear earphone filter are respectively configured with corresponding filter coefficients so as to filter the received audio signals, and finally the left ear earphone outputs left channel audio signals and the right ear earphone outputs right channel audio signals.
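A sketch of the mono case in fig. 3 (a), assuming the left-ear and right-ear coefficients have already been selected as in the lookup sketch above; this is an illustration, not the disclosed implementation.

```python
from scipy.signal import lfilter

def render_mono(mono_signal, left_ear_coeffs, right_ear_coeffs):
    # The same mono signal is filtered with each earphone's own coefficients.
    left_output = lfilter(left_ear_coeffs, [1.0], mono_signal)    # left-ear earphone output
    right_output = lfilter(right_ear_coeffs, [1.0], mono_signal)  # right-ear earphone output
    return left_output, right_output
```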
In some embodiments, when the audio signal fed to each earpiece (the first earpiece 102 and the second earpiece 103) in the earpiece assembly is a two-channel (binaural) signal, the two channels being a left channel and a right channel: for the left channel, a first set of positions and orientations of the left-ear earphone and the right-ear earphone relative to the sound source is acquired; and for the right channel, a second set of positions and orientations of the left-ear earphone and the right-ear earphone relative to the sound source is acquired, each set thus containing a position and orientation for both the left-ear earphone and the right-ear earphone. A first set of head related transform functions is determined based on the first set of positions and orientations, and a second set of head related transform functions is determined based on the second set of positions and orientations. The left channel filter coefficients of the left-ear earphone and of the right-ear earphone corresponding to the first set of head related transform functions are extracted from the filter list, and the right channel filter coefficients of the left-ear earphone and of the right-ear earphone corresponding to the second set of head related transform functions are extracted from the filter list. The left channel filter coefficients and the right channel filter coefficients of the left-ear earphone are combined to serve as the filter coefficients of the left-ear earphone, and the left channel filter coefficients and the right channel filter coefficients of the right-ear earphone are combined to serve as the filter coefficients of the right-ear earphone.
Fig. 3 (b) shows an output schematic of a binaural audio signal according to an embodiment of the disclosure. As shown in fig. 3 (b), when a binaural audio signal is fed to headphones, the left-ear headphones and the right-ear headphones acquire their own position and orientation with respect to the sound source for the left channel, respectively, and determine their head-related transform functions, and determine the corresponding filter coefficients by look-up tables. Thus, the left-ear earphone left channel filter and the right-ear earphone left channel filter respectively filter the received left-channel audio signal by using the filter coefficients thereof, and finally the left-channel audio signal filtered by the left-ear earphone left channel filter is transmitted to the left-ear earphone output, and the left-channel audio signal filtered by the right-ear earphone left channel filter is transmitted to the right-ear earphone output. Likewise, the left and right ear headphones also acquire their own position and orientation with respect to the sound source for the right channel, and determine their head-related transform functions, respectively, and determine the corresponding filter coefficients by looking up a table. Therefore, the left-ear earphone right-channel filter and the right-ear earphone right-channel filter respectively filter the received right-channel audio signals by utilizing the filter coefficients thereof, and finally, the right-channel audio signals filtered by the left-ear earphone right-channel filter are transmitted to the left-ear earphone output, and the right-channel audio signals filtered by the right-ear earphone right-channel filter are transmitted to the right-ear earphone output. The left channel audio signal filtered by the left-ear earphone left channel filter and the right channel audio signal filtered by the left-ear earphone right channel filter are synthesized as the output of the left-ear earphone, and the left channel audio signal filtered by the right-ear earphone left channel filter and the right channel audio signal filtered by the right-ear earphone right channel filter are synthesized as the output of the right-ear earphone.
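A sketch of the two-channel case in fig. 3 (b), again assuming the four sets of coefficients have already been selected; each earphone filters both channel inputs with its own left-channel and right-channel coefficients and sums the results.

```python
from scipy.signal import lfilter

def render_stereo(left_in, right_in,
                  left_ear_left_coeffs, left_ear_right_coeffs,
                  right_ear_left_coeffs, right_ear_right_coeffs):
    # Left-ear earphone: filter both channel inputs and sum them.
    left_ear_out = (lfilter(left_ear_left_coeffs, [1.0], left_in) +
                    lfilter(left_ear_right_coeffs, [1.0], right_in))
    # Right-ear earphone: same structure with its own coefficients.
    right_ear_out = (lfilter(right_ear_left_coeffs, [1.0], left_in) +
                     lfilter(right_ear_right_coeffs, [1.0], right_in))
    return left_ear_out, right_ear_out
```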
In some embodiments, during a first time period within an nth communication frame, audio data information from the other device 101 is received by the first earpiece 102 via the first communication connection 104 and the audio data information from the other device 101 is intercepted by the second earpiece 103, where N is a natural number. After the communication connection shown in fig. 1 is established, the other device 101 transmits audio data information to the first earphone 102, the first earphone 102 receives the audio data information, and the second earphone 103 can also acquire the audio data information transmitted by the other device 101 based on the listening state, where the above process occurs in the first period 402 of the nth communication frame, as shown in fig. 4. Fig. 4 shows a timing diagram of a stereo processing method of a headphone assembly according to an embodiment of the present disclosure, and as shown in fig. 4, another device 101 transmits audio data information in a first period 402 of an nth frame (i.e., time 401 to time 403).
In some embodiments, a response packet is transmitted by the first earpiece 102 and/or the second earpiece 103 to the other device 101 via the first communication connection during the second time period within the n+1th communication frame. The response packet contains ACK/NACK information: transmitting ACK information to the other device 101 indicates that the first earpiece 102 and the second earpiece 103 successfully received the audio data information, while transmitting NACK information indicates that they did not successfully receive it and the other device 101 needs to retransmit the audio data information.
As an example, after the number of times the audio data information is retransmitted by the other device 101 reaches the first preset value and when one of the first earpiece 102 and the second earpiece 103 still has failed to receive the audio data information, a response packet indicating that the audio data information was successfully received is transmitted by the other earpiece to the other device 101 via the first communication connection 104, and the audio data information is forwarded to the earpiece that has failed to receive the audio data information.
As an example, after the number of times the other device 101 has retransmitted the audio data information reaches a second preset value and when neither the first earpiece 102 nor the second earpiece 103 has successfully received the audio data information, the first earpiece 102 and/or the second earpiece 103 transmits a response packet indicating that the audio data information was successfully received to the other device 101 via the first communication connection 104, and recovers the audio data information using a packet loss compensation technique. In this technique, the power spectrum of the partially received audio data information is calculated using its autocorrelation function, and the power spectrum of the missing audio is estimated from it, so that the missing audio signal can be recovered.
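The packet loss compensation idea can be sketched roughly as follows: estimate the power spectrum of the audio that was received from its autocorrelation and synthesize a replacement segment with that spectral envelope and random phase. This is a heavily simplified assumption; a production concealment algorithm would be considerably more elaborate.

```python
import numpy as np

def conceal_lost_audio(received_samples, missing_len):
    received = np.asarray(received_samples, dtype=float)
    # Autocorrelation of the received portion; its Fourier transform is an
    # estimate of the power spectrum (Wiener-Khinchin relation).
    autocorr = np.correlate(received, received, mode='full')[len(received) - 1:]
    power_spectrum = np.abs(np.fft.rfft(autocorr, n=2 * missing_len))
    # Synthesize the missing segment with the estimated magnitude spectrum
    # and random phase, then keep as many samples as were lost.
    phase = np.exp(1j * np.random.uniform(0.0, 2.0 * np.pi, power_spectrum.shape))
    synthesized = np.fft.irfft(np.sqrt(power_spectrum) * phase)
    return synthesized[:missing_len]
```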
Thus, the above embodiments limit the number of times the other device 101 retransmits the same audio data packet, reducing the latency of audio transmission, while the various compensation and correction means preserve the accuracy of the audio transmission. The process of transmitting the response packet described above occurs in the second period 408 (i.e. time 407 to time 409) within the n+1th communication frame, during which the other device 101 receives the response packet, as shown in fig. 4.
In some embodiments, the response packet indicates, directly or indirectly, the reception status of the audio data information by its sender, and whether the audio data information was successfully received can be established because the first earpiece 102 transmits information related to the audio data received from the other device 101 to the second earpiece 103 via the second communication connection 105. Fig. 4 illustrates transmission from the first earpiece 102 to the second earpiece 103 as an example; transmission from the second earpiece 103 to the first earpiece 102 is equally possible, and all of the description given in connection with fig. 4 can be adapted to that transmission direction, which is not repeated here. The related information of the audio data may include an indication packet, which indicates, directly or indirectly, the reception status of the audio data information by its sender.
First, the indication packet may include indication information indicating whether the first earpiece 102 successfully received the audio data packet from the other device 101; after receiving the indication packet, the second earpiece 103 transmits a response packet containing ACK/NACK information to the other device based on the indication information.
Second, the indication packet may include an error correction code packet (also referred to as an ECC packet), which contains an error correction code obtained by encoding the audio data received by the first earpiece 102 but does not contain the audio data itself. The first earpiece 102 only encodes the audio data if the audio data was successfully received, so the very fact that the first earpiece 102 transmits an ECC packet to the second earpiece 103 indicates that the audio data was successfully received; at that point a response packet may be transmitted by the first earpiece 102 to the other device 101, and a response packet may be transmitted by the second earpiece 103 to the other device 101 after it receives the ECC packet. By transmitting the ECC packet instead of the audio data, the second earpiece 103 can be assured of obtaining correct audio data while the amount of data transmitted between the two earpieces is significantly reduced, which further increases the reliability of Bluetooth data transmission.
Third, the indication packet may include the audio data packet received by the first earphone 102 from the other device 101: after successfully receiving the audio data, the first earphone 102 directly packetizes it and transmits the audio data packet to the second earphone 103; the first earphone 102 then transmits a response packet to the other device 101, or the second earphone 103 transmits the response packet to the other device 101 after receiving the audio data packet.
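As a conceptual sketch of the second option (the ECC packet), the snippet below sends only redundancy derived from the audio payload rather than the payload itself; a simple XOR parity block stands in here for the RS or BCH code mentioned later, and the packet layout is an assumption for illustration.

```python
def make_ecc_packet(audio_payload: bytes, block_size: int = 16) -> bytes:
    # XOR all block_size-byte blocks of the payload into one parity block.
    # A real ECC packet would carry a proper error-correcting code instead.
    parity = bytearray(block_size)
    for i, byte in enumerate(audio_payload):
        parity[i % block_size] ^= byte
    return bytes(parity)

# The earpiece that listened to the same transmission can use the redundancy
# to check (and, with a stronger code, correct) its own copy of the audio.
ecc_packet = make_ecc_packet(b'\x01\x02\x03\x04' * 8)
```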
The above-described process, in which the first earpiece 102 transmits the indication packet to the second earpiece 103 via the second communication connection 105, occurs during a third time period within the nth and n+1th communication frames, outside the first time period 402 and the second time period 408; a transition time period may also be included. As shown in fig. 4, the third period may be located after the first period 402 in the nth communication frame, i.e. 404 is a transition period (time 403 to 405) and 406 is a third period (time 405 to 407). In some embodiments, the third time period may also be located in the period 410 (time 409 to 411) after the second time period 408 within the n+1th communication frame, which likewise comprises a transition time period and a third time period. In some embodiments, the transmission of audio data from the other device to the earpieces takes place in the first time period 402, the transmission of indication packets between the earpieces of the assembly takes place in the third time period 406, the transmission of response packets from the earpieces to the other device 101 takes place in the second time period 408, and the transmission of motion parameters and/or synchronized playback information between the earpieces takes place in the third time period 410, so that the transmissions of these different kinds of information in the periods 402, 406, 408 and 410 are independent of one another and do not interfere with each other.
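The role played by each time period in fig. 4 could be summarized in a configuration table such as the illustrative sketch below; the labels and the idea of expressing the schedule as a static table are assumptions, not part of the disclosure.

```python
# Assumed, illustrative mapping of the time periods of fig. 4 to the kind of
# transfer carried out in each of them.
FRAME_SCHEDULE = {
    "frame_N": [
        ("first_period_402", "other device -> earpieces: audio data"),
        ("transition_404", "guard interval"),
        ("third_period_406", "earpiece -> earpiece: indication packet"),
    ],
    "frame_N_plus_1": [
        ("second_period_408", "earpieces -> other device: ACK/NACK response packet"),
        ("third_period_410", "earpiece -> earpiece: motion parameters / sync playback info"),
    ],
}
```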
In some embodiments, the motion parameters are transmitted by the first earpiece 102 to the second earpiece 103 during a third time period, other than the first time period and the second time period, within the nth communication frame and the n+1th communication frame. In the third time period, the motion parameters (and/or the synchronized playback information) are transmitted between the first earpiece 102 and the second earpiece 103 via the second communication connection 105, so that each earpiece performs stereo processing and synchronized playback of the audio signal to be played; the second communication connection 105 and the first communication connection 104 are independent of each other. Based on the motion parameters, the first earpiece 102 and the second earpiece 103 can therefore generate more accurate surround sound, and because the motion parameters are transmitted in a third time period independent of the first and second time periods, the Bluetooth transmission between the earpieces and the smart device is not interfered with, ensuring the stability of the communication connections shown in fig. 1.
In some embodiments, the motion parameters (and/or the synchronized playback information) are integrated into the indication packet and transmitted together with it, which effectively reduces the amount of data transmitted and improves transmission efficiency. The combining of the indication packet with the motion parameters (and/or the synchronized playback information) when transmitting over a Bluetooth connection is described below with reference to fig. 5 (a) and fig. 5 (b).
Fig. 5 (a) shows a schematic diagram of the structure of a Bluetooth physical frame according to an embodiment of the present disclosure, and fig. 5 (b) shows a schematic diagram of the structure of a Bluetooth physical frame according to another embodiment of the present disclosure. Bluetooth supports two data transmission rates, the basic rate and the enhanced rate. In the basic rate packet format shown in fig. 5 (a), the Bluetooth physical frame includes three fields, in the direction from the least significant bit to the most significant bit: an access code 501, a packet header 502, and a payload 503. The access code 501 identifies the piconet and is used for timing synchronization, offset compensation, paging, and inquiry; the packet header 502 contains information for Bluetooth link control; the payload 503 carries the payload information, which in this disclosure may be Bluetooth audio data. The term "audio data packet" as used herein may refer to the audio data corresponding to the payload 503 after removing the access code 501, packet header 502, and similar information from the Bluetooth physical frame.
The enhanced rate packet format is shown in fig. 5 (b): the Bluetooth physical frame includes six fields, in the direction from the least significant bit to the most significant bit, an access code 504, a packet header 505, a guard interval 506, a sync 507, an enhanced rate payload 508, and a packet trailer 509, where the access code 504, packet header 505, and enhanced rate payload 508 are similar to the access code 501, packet header 502, and payload 503 and are not described again. The guard interval 506 represents the interval between the packet header 505 and the sync 507; the sync 507 comprises a synchronization sequence, typically used for differential phase shift keying modulation; the packet trailer 509 has different settings for different modulation schemes. In some embodiments, the end of the payload 503 and of the enhanced rate payload 508 may also carry, for example, 16 bits for a cyclic redundancy check of the data. In some embodiments, the motion information and/or the synchronized playback information may be integrated into the indication packet, so that the effective information of the indication packet and the motion parameters (and/or the synchronized playback information) are placed in the payload 503 or the enhanced rate payload 508 for transmission. The indication packet and the motion information packet and/or synchronized playback information packet are thus combined into one packet (one Bluetooth physical frame), sharing the access code 501 or 504, the packet header 502 or 505, and so on. This effectively simplifies the structure of the Bluetooth physical frame, significantly reduces the overall amount of transmission (for example the access code and header information), shortens the switching time between multiple Bluetooth physical frames, lowers the control complexity, reduces mutual interference between the two packets, and thereby further increases data transmission efficiency. In some embodiments, the receiver of the indication packet starts receiving before the transmission time of the indication packet, which improves reception accuracy and avoids missed reception.
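Packing the effective information of the indication packet together with the motion parameters (and/or synchronized playback information) into a single payload could look like the sketch below; the field sizes, order and encoding are illustrative assumptions and are not taken from the Bluetooth specification or the disclosure.

```python
import struct

def pack_combined_payload(ack_flag, seq_no, yaw, pitch, roll, play_timestamp):
    # 1-byte ACK/NACK flag, 2-byte sequence number, three float32 orientation
    # angles, and a 4-byte playback timestamp, all carried in the payload 503
    # (or enhanced rate payload 508) so that one access code and one packet
    # header are shared.
    return struct.pack('<BHfffI', ack_flag, seq_no, yaw, pitch, roll, play_timestamp)

combined_payload = pack_combined_payload(1, 42, 0.10, -0.02, 0.00, 123456)
```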
In addition, the error correction code contained in the ECC packet here is an error correction code for the audio data in the payload 503 or the enhanced rate payload 508, and various coding schemes may be used, including but not limited to Reed-Solomon (RS) codes, BCH (Bose, Ray-Chaudhuri and Hocquenghem) codes, and the like. In some embodiments, the ECC packet reuses the Bluetooth protocol layers above the physical layer, such as the Bluetooth medium access control (MAC) layer and the Bluetooth host controller interface layer; the physical layer may use a symbol rate of 2 Mb/s, and the modulation may be quadrature phase shift keying (QPSK) or Gaussian frequency shift keying (GFSK). The Bluetooth physical layer ordinarily uses a symbol rate of 1 Mb/s, so by adopting a higher symbol rate the ECC packet can carry more error correction bits and achieve better error correction capability.
The present disclosure also relates to a stereo processing system, fig. 6 shows a schematic diagram of a stereo processing system according to an embodiment of the present disclosure, and as shown in fig. 6, a system 600 includes a detection module 601, a transmission module 602, and an adjustment module 603. The detection module 601 is configured to detect a motion parameter of the first earphone; the transmission module 602 is configured to transmit the detected motion parameter of the first earpiece to the second earpiece; the adjustment module 603 is configured to adjust the audio signals to be played by the first and second headphones, respectively, based on the motion parameters, such that the adjusted audio signals simulate sound variations resulting from the motion of the corresponding headphones relative to the sound source position.
In some embodiments, the adjustment module 603 is specifically configured to: determining a position and orientation of each of the first earpiece and the second earpiece relative to the sound source based on the motion parameter; selecting a filter coefficient corresponding to the closest position and orientation relative to the sound source in a predetermined filter list based on the determined position and orientation of each earphone relative to the sound source; the audio signals to be played by the respective headphones are subjected to a filtering process using the selected filter coefficients.
In some embodiments, the adjustment module 603 is specifically configured to pre-determine the filter list by: pre-measuring head related transform functions for different positions and orientations relative to the sound source; determining corresponding filter coefficients based on pre-measured head related transform functions of different positions and orientations; filter coefficients are stored in association with the position and orientation, thereby constructing a filter list.
In some embodiments, the adjustment module 603 is specifically configured to, when the audio signal is mono: the method comprises the steps of respectively obtaining the positions and the orientations of a left ear earphone and a right ear earphone relative to a sound source; determining a head related transform function based on the position and orientation of the sound source; filter coefficients corresponding to the head related transform function are extracted from the filter list as left channel filter coefficients of the left ear phone and right channel filter coefficients of the right ear phone, respectively.
In some embodiments, the adjustment module 603 is specifically configured to, when the audio signal is a binaural channel, the binaural channel comprises a left channel and a right channel: for a left channel, respectively acquiring a first group of positions and orientations of a left ear earphone and a right ear earphone relative to a sound source, and for a right channel, respectively acquiring a second group of positions and orientations of the left ear earphone and the right ear earphone relative to the sound source; determining a first set of head related transform functions based on the first set of positions and orientations, determining a second set of head related transform functions based on the second set of positions and orientations; extracting left channel filter coefficients of a left ear earphone and left channel filter coefficients of a right ear earphone corresponding to the first set of head related transform functions from the filter list, and extracting right channel filter coefficients of the left ear earphone and right channel filter coefficients of the right ear earphone corresponding to the second set of head related transform functions from the filter list; the left channel audio signal filtered by the left channel filter of the left ear earphone and the right channel audio signal filtered by the right channel filter of the left ear earphone are synthesized as the output of the left ear earphone, and the left channel audio signal filtered by the left channel filter of the right ear earphone and the right channel audio signal filtered by the right channel filter of the right ear earphone are synthesized as the output of the right ear earphone.
In some embodiments, the motion parameters include any one or more of angular velocity, acceleration, displacement, position, and orientation.
The system promptly adjusts the stereo generated by the earphones based on the motion parameters, so as to simulate the stereo as changed by the relative position of the earphones and the sound source.
The present disclosure also relates to an earphone assembly comprising at least a first earphone and a second earphone. The first earphone detects its motion parameter using a detection device arranged on the first earphone and transmits the detected motion parameter of the first earphone to the second earphone; the first earphone and the second earphone each adjust the audio signal to be played based on the motion parameter, so that the adjusted audio signals simulate the sound changes caused by the motion of the corresponding earphone relative to the sound source position. The earphones can thus promptly adjust the stereo they generate based on the motion parameters, so as to simulate the stereo as changed by the relative position of the earphones and the sound source.
Furthermore, although exemplary embodiments have been described herein, the scope thereof includes any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of the various embodiments across schemes), adaptations or alterations based on the present disclosure. Elements in the claims are to be construed broadly based on the language employed in the claims and are not limited to examples described in the present specification or during the practice of the present application, which examples are to be construed as non-exclusive. It is intended, therefore, that the specification and examples be considered as exemplary only, with a true scope and spirit being indicated by the following claims and their full scope of equivalents.
The above description is intended to be illustrative and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other, and other embodiments may be devised by those of ordinary skill in the art upon reading the above description. In addition, in the above detailed description, various features may be grouped together to streamline the disclosure. This should not be interpreted as an intention that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that these embodiments may be combined with one another in various combinations or permutations. The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.

Claims (14)

1. A stereo processing method of an earphone assembly, the earphone assembly including a first earphone and a second earphone, the stereo processing method comprising:
detecting a motion parameter of the first earphone;
transmitting the detected motion parameters of the first earphone to the second earphone; and
respectively adjusting the audio signals to be played by the first earphone and the second earphone based on the motion parameters so that the adjusted audio signals simulate sound changes caused by the motion of the corresponding earphone relative to the sound source position, wherein the method specifically comprises the following steps:
determining a position and orientation of each of the first earpiece and the second earpiece relative to the sound source based on the motion parameter;
selecting a nearest filter coefficient corresponding to the position and orientation of the sound source in a predetermined filter list based on the determined position and orientation of each earphone relative to the sound source, the filter list being predetermined by: pre-measuring head related transform functions for different positions and orientations relative to the sound source; determining the corresponding filter coefficients based on the head related transform functions of the different positions and orientations measured in advance; storing the filter coefficients in association with the position and orientation, thereby constructing the filter list;
the audio signals to be played by the respective headphones are subjected to a filtering process using the selected filter coefficients.
2. The stereo processing method according to claim 1, wherein when the audio signal is mono:
the position and the azimuth of the left ear earphone and the right ear earphone relative to the sound source are respectively obtained;
determining the head related transform function based on the position and orientation of the sound source; and
extracting the filter coefficients corresponding to the head related transform function from the filter list to serve as left channel filter coefficients of the left ear earphone and right channel filter coefficients of the right ear earphone, respectively.
3. The stereo processing method according to claim 1, wherein when the audio signal is a binaural channel, the binaural channel includes a left channel and a right channel:
respectively acquiring a first group of positions and orientations of a left ear earphone and a right ear earphone relative to the sound source for the left channel, and respectively acquiring a second group of positions and orientations of the left ear earphone and the right ear earphone relative to the sound source for the right channel;
determining a first set of head related transform functions based on the first set of positions and orientations, determining a second set of head related transform functions based on the second set of positions and orientations;
extracting left channel filter coefficients of the left ear headphones and left channel filter coefficients of the right ear headphones corresponding to the first set of head-related transform functions from the filter list, extracting right channel filter coefficients of the left ear headphones and right channel filter coefficients of the right ear headphones corresponding to the second set of head-related transform functions from the filter list; and
synthesizing a left channel audio signal filtered by a left channel filter of the left ear earphone and a right channel audio signal filtered by a right channel filter of the left ear earphone as outputs of the left ear earphone, and synthesizing a left channel audio signal filtered by the left channel filter of the right ear earphone and a right channel audio signal filtered by the right channel filter of the right ear earphone as outputs of the right ear earphone.
4. A method for stereo processing according to claim 1, wherein,
receiving, by the first earpiece, audio data information from another device via a first communication connection and listening, by the second earpiece, to the audio data information from the other device in a first time period within an nth communication frame, N being a natural number;
transmitting, by the first earpiece and/or the second earpiece, a reply packet to the other device via the first communication connection during a second time period within an n+1th communication frame, the reply packet indicating in a direct or indirect manner a reception status of the audio data information by its sender;
and transmitting the motion parameter to the second earphone by the first earphone in a third time period other than the first time period and the second time period within the Nth communication frame and the (N+1)th communication frame.
5. The stereo processing method of claim 4, wherein the motion parameter and synchronized playback information are transmitted by the first earpiece to the second earpiece during the third time period.
6. The method according to claim 4, wherein the other device retransmits the audio data information when the response packet indicates that the reception condition of the audio data information by the sender thereof is reception failure.
7. A method for stereo processing according to claim 6, wherein,
after the number of times of retransmitting the audio data information by the other device reaches a first preset value and when one of the first earphone and the second earphone still does not successfully receive the audio data information, transmitting a response packet indicating that the audio data information is successfully received to the other device by the other earphone through the first communication connection, and forwarding the audio data information to the one earphone.
8. A method for stereo processing according to claim 6, wherein,
and after the number of times the other device resends the audio data information reaches a second preset value and when neither the first earphone nor the second earphone successfully receives the audio data information, the first earphone and/or the second earphone transmits a response packet indicating that the audio data information was successfully received to the other device through the first communication connection, and recovers the audio data information by utilizing a packet loss compensation technique.
9. The stereo processing method according to claim 1, wherein the motion parameter includes any one or more of angular velocity, acceleration, displacement, position, and orientation.
10. A stereo processing system of a headset assembly, the headset assembly comprising a first headset and a second headset, the stereo processing system comprising:
a detection module configured to detect a motion parameter of the first earpiece;
a transmission module configured to transmit the detected motion parameter of the first earpiece to the second earpiece; and
the adjusting module is configured to respectively adjust the audio signals to be played by the first earphone and the second earphone based on the motion parameters, so that the adjusted audio signals simulate sound changes caused by the motion of the corresponding earphone relative to the sound source position, and specifically comprises the following steps:
determining a position and orientation of each of the first earpiece and the second earpiece relative to the sound source based on the motion parameter;
selecting a nearest filter coefficient corresponding to the position and orientation of the sound source in a predetermined filter list based on the determined position and orientation of each earphone relative to the sound source, the filter list being predetermined by: pre-measuring head related transform functions for different positions and orientations relative to the sound source; determining the corresponding filter coefficients based on the head related transform functions of the different positions and orientations measured in advance; storing the filter coefficients in association with the position and orientation, thereby constructing the filter list;
the audio signals to be played by the respective headphones are subjected to a filtering process using the selected filter coefficients.
11. The stereo processing system of claim 10, wherein, when the audio signal is mono, the adjustment module is configured to:
obtain the position and orientation of each of the left-ear earphone and the right-ear earphone relative to the sound source;
determine the head-related transfer function based on the position and orientation relative to the sound source; and
extract the filter coefficients corresponding to the head-related transfer function from the filter list, to serve as the left-channel filter coefficients of the left-ear earphone and the right-channel filter coefficients of the right-ear earphone, respectively.
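As an illustrative sketch only (not part of the claims), the mono case of claim 11 amounts to filtering the same mono signal with the coefficients selected for each ear; the placeholder coefficient arrays below stand in for entries looked up in the filter list.

```python
import numpy as np
from scipy.signal import lfilter

def mono_to_binaural(mono, coeffs_left_ear, coeffs_right_ear):
    """Filter one mono signal with each ear's selected coefficients,
    giving the left-ear earphone output and the right-ear earphone output."""
    left_out = lfilter(coeffs_left_ear, [1.0], mono)
    right_out = lfilter(coeffs_right_ear, [1.0], mono)
    return left_out, right_out

# Placeholder data: random mono audio and two short stand-in coefficient sets.
mono = np.random.default_rng(1).standard_normal(4800)
left, right = mono_to_binaural(mono, np.array([1.0, 0.3]), np.array([0.8, 0.5]))
```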
12. The stereo processing system of claim 10, wherein, when the audio signal is two-channel, the two channels comprising a left channel and a right channel, the adjustment module is configured to:
acquire, for the left channel, a first set of positions and orientations of the left-ear earphone and the right-ear earphone relative to the sound source, and acquire, for the right channel, a second set of positions and orientations of the left-ear earphone and the right-ear earphone relative to the sound source;
determine a first set of head-related transfer functions based on the first set of positions and orientations, and determine a second set of head-related transfer functions based on the second set of positions and orientations;
extract from the filter list the left-channel filter coefficients of the left-ear earphone and the left-channel filter coefficients of the right-ear earphone corresponding to the first set of head-related transfer functions, and extract from the filter list the right-channel filter coefficients of the left-ear earphone and the right-channel filter coefficients of the right-ear earphone corresponding to the second set of head-related transfer functions; and
synthesize the left-channel audio signal filtered by the left-channel filter of the left-ear earphone and the right-channel audio signal filtered by the right-channel filter of the left-ear earphone as the output of the left-ear earphone, and synthesize the left-channel audio signal filtered by the left-channel filter of the right-ear earphone and the right-channel audio signal filtered by the right-channel filter of the right-ear earphone as the output of the right-ear earphone.
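As an illustrative sketch only (not part of the claims), the two-channel mixing of claim 12 can be written as four filters and two sums; the short placeholder coefficient arrays stand in for the filter-list entries selected for each ear and channel.

```python
import numpy as np
from scipy.signal import lfilter

def render_two_channel(left_ch, right_ch, h_ll, h_lr, h_rl, h_rr):
    """Mix a two-channel signal for each ear.

    h_ll: left-channel filter of the left-ear earphone
    h_lr: right-channel filter of the left-ear earphone
    h_rl: left-channel filter of the right-ear earphone
    h_rr: right-channel filter of the right-ear earphone
    Each earphone's output is the sum of its two filtered channel signals.
    """
    left_ear_out = lfilter(h_ll, [1.0], left_ch) + lfilter(h_lr, [1.0], right_ch)
    right_ear_out = lfilter(h_rl, [1.0], left_ch) + lfilter(h_rr, [1.0], right_ch)
    return left_ear_out, right_ear_out

rng = np.random.default_rng(2)
left_ear, right_ear = render_two_channel(
    rng.standard_normal(4800), rng.standard_normal(4800),
    np.array([1.0, 0.2]), np.array([0.3, 0.1]),
    np.array([0.3, 0.1]), np.array([1.0, 0.2]))
```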
13. The stereo processing system of claim 10, wherein the motion parameters include any one or more of angular velocity, acceleration, displacement, position, and orientation.
14. An earphone assembly, wherein the earphone assembly comprises at least a first earphone and a second earphone, and is configured such that:
the first earphone detects a motion parameter of the first earphone using a detection device arranged on the first earphone, and transmits the detected motion parameter of the first earphone to the second earphone; and
the first earphone and the second earphone respectively adjust, based on the motion parameter, the audio signals to be played, so that the adjusted audio signals simulate the sound changes caused by the motion of the corresponding earphone relative to the sound source position, which specifically comprises:
determining a position and orientation of each of the first earphone and the second earphone relative to the sound source based on the motion parameter;
selecting, from a predetermined filter list, the filter coefficients closest to the determined position and orientation of each earphone relative to the sound source, the filter list being predetermined by: measuring head-related transfer functions in advance for different positions and orientations relative to the sound source; determining the corresponding filter coefficients based on the pre-measured head-related transfer functions for the different positions and orientations; and storing the filter coefficients in association with those positions and orientations, thereby constructing the filter list; and
filtering the audio signals to be played by the respective earphones using the selected filter coefficients.
CN201911377379.7A 2019-12-27 2019-12-27 Stereo processing method and system for earphone assembly and earphone assembly Active CN111142665B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911377379.7A CN111142665B (en) 2019-12-27 2019-12-27 Stereo processing method and system for earphone assembly and earphone assembly

Publications (2)

Publication Number Publication Date
CN111142665A CN111142665A (en) 2020-05-12
CN111142665B (en) 2024-02-06

Family

ID=70520957

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911377379.7A Active CN111142665B (en) 2019-12-27 2019-12-27 Stereo processing method and system for earphone assembly and earphone assembly

Country Status (1)

Country Link
CN (1) CN111142665B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112235690B * 2020-10-13 2022-05-10 Bestechnic (Shanghai) Co., Ltd. Method and device for adjusting audio signal, earphone assembly and readable storage medium
CN112612445A * 2020-12-28 2021-04-06 Vivo Mobile Communication Co., Ltd. Audio playing method and device
CN114543844A * 2021-04-09 2022-05-27 Bestechnic (Shanghai) Co., Ltd. Audio playing processing method and device of wireless audio equipment and wireless audio equipment
CN114363770B * 2021-12-17 2024-03-26 Beijing Xiaomi Mobile Software Co., Ltd. Filtering method and device in pass-through mode, earphone and readable storage medium
CN114745637A * 2022-04-14 2022-07-12 Liu Daozheng Sound effect realization method of wireless audio equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7599498B2 (en) * 2004-07-09 2009-10-06 Emersys Co., Ltd Apparatus and method for producing 3D sound
US9432778B2 (en) * 2014-04-04 2016-08-30 Gn Resound A/S Hearing aid with improved localization of a monaural signal source
US20170223474A1 (en) * 2015-11-10 2017-08-03 Bender Technologies, Inc. Digital audio processing systems and methods

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9326092D0 (en) * 1993-12-21 1994-02-23 Central Research Lab Ltd Apparatus and method for audio signal balance control
CN1277532A * 1999-06-10 2000-12-20 Samsung Electronics Co., Ltd. Multiple-channel audio frequency replaying apparatus and method
WO2008106680A2 (en) * 2007-03-01 2008-09-04 Jerry Mahabub Audio spatialization and environment simulation
CN101960866A * 2007-03-01 2011-01-26 Jerry Mahabub Audio spatialization and environment simulation
EP2928213A1 (en) * 2014-04-04 2015-10-07 GN Resound A/S A hearing aid with improved localization of a monaural signal source
GB201517844D0 (en) * 2015-10-08 2015-11-25 Two Big Ears Ltd Binaural synthesis
CN109660971A * 2018-12-05 2019-04-19 Bestechnic (Shanghai) Co., Ltd. Wireless earphone and communication method for wireless earphone

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Yue Dawei; Dong Kaihan; Liu Zuojun; Wang Defeng. Navigation method for a hazard-removal robot based on audio band-pass filtering. Journal of Hebei University of Technology, 2013, No. 2, full text. *
Zhang Zongshuai; Gu Yaping; Zhang Jun; Yang Xiaoping. Virtual sound source localization based on HRTF. Network New Media Technology, 2015, No. 2, full text. *
Luo Fuyuan; Wang Xingren. Three-dimensional sound in a virtual cockpit. Journal of Beijing University of Aeronautics and Astronautics, 1999, No. 3, full text. *

Also Published As

Publication number Publication date
CN111142665A (en) 2020-05-12

Similar Documents

Publication Publication Date Title
CN109660971B (en) Wireless earphone and communication method for wireless earphone
CN111142665B (en) Stereo processing method and system for earphone assembly and earphone assembly
CN109561419B Highly reliable binaural wireless earphone and communication method for binaural wireless earphone
CN111031437B (en) Wireless earphone assembly and communication method thereof
CN110769347B (en) Synchronous playing method of earphone assembly and earphone assembly
US10798477B2 (en) Wireless audio system and method for wirelessly communicating audio information using the same
US10348370B2 (en) Wireless device communication
CN112020136B (en) Audio system and wireless earphone pair
CN110636487B (en) Wireless earphone and communication method thereof
CN110708142A (en) Audio data communication method, system and equipment
CN111741401B (en) Wireless communication method for wireless headset assembly and wireless headset assembly
US11418297B2 (en) Systems and methods including wireless data packet retransmission schemes
CN112039637A (en) Audio data communication method and system and audio communication equipment
US20230344535A1 (en) Robust broadcast via relayed retransmission
WO2021217723A1 (en) Systems and methods for wireless transmission of audio information
EP4184938A1 (en) Communication method and device used for wireless dual earphones
CN112335328A (en) Method and system for transmitting audio data and wireless audio system
CN111955018B (en) Method and system for connecting an audio accessory device with a client computing device
KR20150130894A (en) Method and apparatus for communicating audio data
CN112235690B (en) Method and device for adjusting audio signal, earphone assembly and readable storage medium
US10778479B1 (en) Systems and methods for wireless transmission of audio information
CN114079537B (en) Audio packet loss data receiving method, device, audio playing equipment and system
EP4325884A1 (en) Head-mounted wireless earphones and communication method therefor
EP4262229A1 (en) Wireless headphones and audio device
CN114079899A (en) Bluetooth communication data processing circuit, packet loss processing method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant