CN107742523B - Voice signal processing method and device and mobile terminal - Google Patents


Info

Publication number
CN107742523B
Authority
CN
China
Prior art keywords: voice signal, microphone, mobile terminal, time period, current
Prior art date
Legal status
Active
Application number
CN201711140814.5A
Other languages
Chinese (zh)
Other versions
CN107742523A (en)
Inventor
杨宗业
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201711140814.5A priority Critical patent/CN107742523B/en
Publication of CN107742523A publication Critical patent/CN107742523A/en
Application granted granted Critical
Publication of CN107742523B publication Critical patent/CN107742523B/en


Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 21/00 - Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/02 - Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L 21/0208 - Noise filtering
    • G10L 21/0216 - Noise filtering characterised by the method used for estimating noise
    • G10L 2021/02082 - Noise filtering, the noise being echo, reverberation of the speech
    • G10L 2021/02161 - Number of inputs available containing the signal or the noise to be suppressed
    • G10L 2021/02165 - Two microphones, one receiving mainly the noise signal and the other one mainly the speech signal

Abstract

The embodiments of the application disclose a voice signal processing method and device and a mobile terminal. The method comprises the following steps: obtaining a first voice signal received by a main microphone in a current first time period and a second voice signal received by an auxiliary microphone in the same period, wherein the first time period is a time period in which the mobile terminal is in a hands-free call state and no downlink voice signal is detected; and taking the voice signal with the greater intensity of the first voice signal and the second voice signal as the main input signal for microphone noise reduction processing in the current first time period, and taking the voice signal with the lesser intensity as the secondary input signal. In this way, no matter which microphone the user speaks towards, the uplink voice signal obtained after microphone noise reduction retains a high intensity, so that the quality of the uplink voice is improved while the noise reduction effect is preserved.

Description

Voice signal processing method and device and mobile terminal
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for processing a voice signal, and a mobile terminal.
Background
To eliminate the environmental noise picked up during sound collection, a mobile terminal is usually equipped with two or more microphones for microphone noise reduction processing. In a dual-microphone arrangement, one microphone provides the main input and the other provides the secondary input. During noise reduction, the mobile terminal filters the sound signal collected by the main-input microphone against the sound signal collected by the secondary-input microphone to remove the noise in the main-input signal, and then uses the filtered main-input signal as the uplink speech signal.
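For illustration only, the following Python sketch shows one common way such dual-microphone filtering can be realised, assuming a normalized LMS adaptive filter; the function name, tap count and step size are assumptions of this sketch and are not prescribed by the application.

```python
import numpy as np

def nlms_noise_cancel(primary, secondary, taps=64, mu=0.1, eps=1e-8):
    """Cancel the noise in the main-input signal using the secondary-input
    signal as a noise reference (normalized LMS adaptive filter)."""
    primary = np.asarray(primary, dtype=float)
    secondary = np.asarray(secondary, dtype=float)
    w = np.zeros(taps)                    # adaptive filter coefficients
    out = np.zeros_like(primary)
    for n in range(taps, len(primary)):
        x = secondary[n - taps:n][::-1]   # most recent reference samples
        noise_est = np.dot(w, x)          # noise predicted from the reference
        e = primary[n] - noise_est        # error = denoised main-input sample
        w += (mu / (np.dot(x, x) + eps)) * e * x   # NLMS coefficient update
        out[n] = e
    return out                            # filtered main input used as uplink speech
```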
In the hands-free mode, the microphone arranged beside the loudspeaker at the bottom of the mobile terminal is conventionally used as the secondary input, while the microphone arranged at the top of the mobile terminal or near the rear camera is used as the main input. With this fixed assignment of main and secondary microphones, however, the uplink voice quality after microphone noise reduction processing is poor.
Disclosure of Invention
In view of the foregoing problems, the present application provides a method and an apparatus for processing a voice signal, and a mobile terminal, so as to improve the quality of uplink voice.
In a first aspect, the present application provides a speech signal processing method, which is applied to a mobile terminal, where the mobile terminal includes a terminal main body, a main microphone, an auxiliary microphone and a speaker, the main microphone and the speaker are both disposed at a first end of the terminal main body, and the auxiliary microphone is disposed at a second end of the terminal main body opposite to the first end. The method comprises the following steps: acquiring a first voice signal received by the main microphone in a current first time period and a second voice signal received by the auxiliary microphone in the current first time period, wherein the first time period is a time period when the mobile terminal is in a hands-free call state and no downlink voice signal is detected; judging the strength of the first voice signal and the second voice signal; and taking the voice signal with the larger intensity in the first voice signal and the second voice signal as a main input signal for microphone noise reduction processing output in the current first time period, and taking the voice signal with the smaller intensity in the first voice signal and the second voice signal as a secondary input signal for microphone noise reduction processing output in the current first time period.
In a second aspect, the present application provides a speech signal processing apparatus, which operates on a mobile terminal, the mobile terminal includes a terminal main body, a main microphone, an auxiliary microphone and a speaker, the main microphone and the speaker are both disposed at a first end of the terminal main body, and the auxiliary microphone is disposed at a second end opposite to the first end of the terminal main body. The device comprises: a voice signal acquiring unit, configured to acquire a first voice signal received by the primary microphone in a current first time period, and a second voice signal received by the secondary microphone in the current first time period, where the first time period is a time period when the mobile terminal is in a hands-free call state and no downlink voice signal is detected; the judging unit is used for judging the strength of the first voice signal and the second voice signal; and the signal processing unit is used for taking the voice signal with the higher intensity in the first voice signal and the second voice signal as a main input signal which is output in the current first time period and used for the microphone noise reduction processing, and taking the voice signal with the lower intensity in the first voice signal and the second voice signal as a secondary input signal which is output in the current first time period and used for the microphone noise reduction processing.
In a third aspect, the present application provides a mobile terminal comprising one or more processors and a memory; one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the above-described methods.
In a fourth aspect, the present application provides a computer-readable storage medium comprising a stored program, wherein the method described above is performed when the program is run.
After the first voice signal received by the main microphone and the second voice signal received by the auxiliary microphone in the current first time period are acquired, their intensities are compared: the stronger signal is used as the main input for microphone noise reduction processing in the current first time period, and the weaker signal is used as the secondary input. The main input therefore always has a higher signal intensity than the secondary input, so that whichever microphone the user speaks towards, the uplink voice signal obtained after microphone noise reduction retains a high intensity. The quality of the uplink voice is thus improved while the noise reduction effect is preserved.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 shows a flow chart of a speech signal processing method proposed in the present application;
FIG. 2 is a flow chart illustrating another speech signal processing method proposed in the present application;
FIG. 3 is a flow chart of another speech signal processing method proposed in the present application;
fig. 4 is a block diagram illustrating a structure of a speech signal processing apparatus proposed in the present application;
fig. 5 is a block diagram showing another speech signal processing apparatus proposed in the present application;
fig. 6 is a block diagram illustrating a structure of another speech signal processing apparatus proposed in the present application;
fig. 7 is a schematic structural diagram of a mobile terminal proposed in the present application;
fig. 8 shows a block diagram of a mobile terminal of the present application for performing a speech signal processing method according to an embodiment of the present application;
fig. 9 is a schematic diagram illustrating a first view of a mobile terminal according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Generally, in order to eliminate ambient noise during a voice call, the mobile terminal is provided with two or more microphones for microphone noise reduction processing. During a call, the mobile terminal filters the voice signals collected by the microphones against one another to suppress the environmental noise. In a specific noise reduction process, the voice signal acquired by one microphone serves as the main input and the voice signal acquired by the other microphone serves as the secondary input, i.e. as a reference signal; the reference signal is used to cancel the environmental noise in the main-input voice signal, and the noise-reduced main-input voice signal is finally used as the uplink voice.
In the hands-free call mode, the mobile terminal takes the voice signal collected by the microphone arranged at the back or the top of the mobile terminal as the main input voice signal, and takes the voice signal collected by the microphone arranged beside the loudspeaker at the bottom as the secondary input voice signal. However, the inventor has found that in hands-free mode a user usually speaks towards the microphone at the bottom of the mobile terminal, so the intensity of the main input voice signal is smaller than that of the secondary input voice signal. As a result, the uplink voice signal after microphone noise reduction is weak, and in noisy environments in particular the processed sound becomes muffled.
Therefore, the inventor proposes in the present application a voice signal processing method, a voice signal processing device and a mobile terminal for improving the uplink voice quality.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Referring to fig. 1, a voice signal processing method provided by the present application is applied to a mobile terminal, where the mobile terminal includes a terminal main body, a main microphone, an auxiliary microphone and a speaker, the main microphone and the speaker are both disposed at a first end of the terminal main body, and the auxiliary microphone is disposed at a second end of the terminal main body, opposite to the first end. The method comprises the following steps:
step S110: and acquiring a first voice signal received by the main microphone in a current first time period and a second voice signal received by the auxiliary microphone in the current first time period, wherein the first time period is a time period when the mobile terminal is in a hands-free call state and no downlink voice signal is detected.
As one mode, the downlink voice signal is a voice signal transmitted to the mobile terminal by a communication base station, a server or the like, for playback by the mobile terminal. When the mobile terminal detects that no downlink voice signal is present and that the main microphone and the auxiliary microphone are both collecting voice signals, it begins buffering the signals collected by the two microphones. After a preset time period has been collected with no downlink voice signal detected during that period, the period is determined to be a first time period, and the first voice signal collected by the main microphone and the second voice signal collected by the auxiliary microphone in that first time period are acquired.
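As a non-limiting sketch of the gating just described, the following Python fragment buffers one candidate period from both microphones and accepts it as a first time period only if the hands-free state persists and no downlink voice is detected; the frame-reading calls, the downlink detector and the period length are hypothetical names introduced for illustration.

```python
def collect_first_time_period(main_mic, aux_mic, downlink_detected,
                              hands_free_active, period_frames=50):
    """Buffer one candidate period from both microphones; return the two
    buffered signals only if the terminal stayed in hands-free mode and no
    downlink voice was detected during the whole period, else return None."""
    main_buf, aux_buf = [], []
    for _ in range(period_frames):
        main_buf.append(main_mic.read_frame())     # hypothetical frame source
        aux_buf.append(aux_mic.read_frame())
        if not hands_free_active() or downlink_detected():
            return None                            # not a valid first time period
    return main_buf, aux_buf
```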
Step S120: and judging the strength of the first voice signal and the second voice signal.
As one way, when judging the strengths, the mobile terminal compares the signal amplitudes of the first voice signal and the second voice signal, for example the average amplitude strength of each signal over the first time period.
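This average amplitude strength can be realised, for instance, as the mean absolute sample value; the minimal Python sketch below assumes that measure, which is only one possible choice.

```python
import numpy as np

def signal_intensity(samples):
    """Average amplitude strength of a buffered voice signal over the
    first time period (mean absolute sample value)."""
    return float(np.mean(np.abs(samples)))
```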
Step S130: and taking the voice signal with the larger intensity in the first voice signal and the second voice signal as a main input signal for microphone noise reduction processing output in the current first time period, and taking the voice signal with the smaller intensity in the first voice signal and the second voice signal as a secondary input signal for microphone noise reduction processing output in the current first time period.
It should be noted that, while acquiring voice in the hands-free mode, when a downlink voice signal is detected in the preset time period, the voice signal acquired by the auxiliary microphone in that period is taken as the main input of the microphone noise reduction processing and the voice signal acquired by the main microphone is taken as the secondary input, so as to prevent the played-back downlink voice from being carried in the uplink voice signal as an echo.
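Combining steps S110 to S130 with the note above, the selection of the main and secondary inputs could be sketched as follows; this is an illustrative Python fragment that reuses the signal_intensity measure sketched earlier, not the prescribed implementation.

```python
def select_noise_reduction_inputs(first_signal, second_signal, downlink_detected):
    """Choose (main_input, secondary_input) for microphone noise reduction
    in the current period.

    first_signal  - signal received by the main microphone (bottom, beside the speaker)
    second_signal - signal received by the auxiliary microphone (top or back)
    """
    if downlink_detected:
        # Downlink voice present: keep the auxiliary microphone as the main input
        # and the main microphone (next to the speaker) as the secondary input,
        # so the played-back downlink voice is not carried upstream as an echo.
        return second_signal, first_signal
    if signal_intensity(first_signal) >= signal_intensity(second_signal):
        return first_signal, second_signal   # stronger signal becomes the main input
    return second_signal, first_signal
```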
In the speech signal processing method provided by the present application, after the first speech signal received by the main microphone and the second speech signal received by the auxiliary microphone in the current first time period are acquired, their intensities are compared; the stronger signal is used as the main input for microphone noise reduction processing in the current first time period and the weaker signal as the secondary input. In the noise reduction processing the main input therefore has the greater signal intensity, so that whichever microphone the user speaks towards, the uplink voice signal obtained after microphone noise reduction retains a high intensity and does not sound muffled. The quality of the uplink voice is thus improved while the noise reduction effect is preserved.
Referring to fig. 2, a voice signal processing method provided by the present application is applied to a mobile terminal, where the mobile terminal includes a terminal main body, a main microphone, an auxiliary microphone and a speaker, the main microphone and the speaker are both disposed at a first end of the terminal main body, and the auxiliary microphone is disposed at a second end of the terminal main body, opposite to the first end. The method comprises the following steps:
step S210: and acquiring a first voice signal received by the main microphone in a current first time period and a second voice signal received by the auxiliary microphone in the current first time period, wherein the first time period is a time period when the mobile terminal is in a hands-free call state and no downlink voice signal is detected.
Step S220: and judging the strength of the first voice signal and the second voice signal.
Step S230: and acquiring and judging the voice signal with higher intensity in the first voice signal and the second voice signal.
Step S240: and adjusting the strength of the voice signal with larger strength based on the main input voice signal determined in the previous first time period, so that the strength difference between the main input voice signal determined in the previous first time period and the voice signal with larger strength meets a preset value.
When the user speaks continuously, the mobile terminal caches the voice collected by the main microphone and the auxiliary microphone, and the length of the first time period may be the length of voice signal that the mobile terminal can cache. When the cache is full, voice data written later overwrites the oldest voice data in the cache, so there will be a plurality of consecutive first time periods while the user is speaking.
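A minimal sketch of such a cache is shown below, assuming a fixed-length buffer that overwrites its oldest frames when full; the class and method names are illustrative only.

```python
from collections import deque

class VoiceCache:
    """Fixed-length voice cache; when full, new frames overwrite the oldest ones,
    so each successive fill of the cache yields one consecutive first time period."""
    def __init__(self, max_frames):
        self.frames = deque(maxlen=max_frames)   # deque drops the oldest frame on overflow

    def write(self, frame):
        self.frames.append(frame)

    def full_period(self):
        """Return the buffered first time period once the cache is full, else None."""
        return list(self.frames) if len(self.frames) == self.frames.maxlen else None
```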
As one mode, after the voice signal with the greater intensity of the first voice signal and the second voice signal is determined, that signal may be compared with a preset intensity value in order to decide whether the collected voice signal is speech uttered by the user; only when its intensity exceeds the preset intensity value is that voice signal used as the main input of the microphone noise reduction processing.
In the hands-free call mode, the stronger signal may alternate between the main microphone and the auxiliary microphone. As an embodiment, the signal intensities of the main input voice signals output in a plurality of consecutive first time periods may therefore be smoothed: the mobile terminal adjusts the intensity of the stronger voice signal in the current first time period based on the main input voice signal determined in the previous first time period, so that the intensity difference between the two satisfies a preset value and abrupt changes of signal intensity are avoided.
It should be noted that the preset value can be set according to actual situations. For example, the preset value may be set to zero, or may be any other number.
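The smoothing and the preset-value check described above might be sketched as follows; the default preset difference of zero and the preset intensity value are placeholders chosen for illustration, and signal_intensity is the measure sketched earlier.

```python
def smooth_main_input(stronger_signal, prev_main_intensity,
                      preset_diff=0.0, preset_min_intensity=0.01):
    """Adjust the stronger signal so that its intensity differs from the
    previous period's main-input intensity by at most preset_diff.
    Returns None if the stronger signal does not exceed the preset
    intensity value (i.e. it is unlikely to be the user's speech)."""
    current = signal_intensity(stronger_signal)
    if current <= preset_min_intensity:
        return None                               # not treated as user speech
    if prev_main_intensity is None:
        return stronger_signal                    # first period: nothing to smooth
    # Clamp the intensity change to the preset difference value.
    target = min(max(current, prev_main_intensity - preset_diff),
                 prev_main_intensity + preset_diff)
    gain = target / current
    return [s * gain for s in stronger_signal]    # scaled main input for this period
```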
Step S250: and taking the adjusted voice signal as a main input signal which is output in the current first time period and used for microphone noise reduction processing, and taking one voice signal with smaller intensity in the first voice signal and the second voice signal as an auxiliary input signal which is output in the current first time period and used for microphone noise reduction processing.
According to this voice signal processing method, after the voice signal with the greater intensity of the first voice signal and the second voice signal is obtained and judged to satisfy the preset value, it is used as the main input signal for microphone noise reduction processing in the current first time period, while the weaker signal is used as the secondary input. The uplink voice signal obtained after microphone noise reduction therefore retains a high intensity and does not become weak, so that the quality of the uplink voice is improved while the noise reduction effect is guaranteed. In addition, when voice signals in a plurality of consecutive first time periods are processed, smoothing is applied to prevent sudden changes of voice signal intensity when the main input is switched.
Referring to fig. 3, a voice signal processing method provided by the present application is applied to a mobile terminal, where the mobile terminal includes a terminal main body, a main microphone, an auxiliary microphone and a speaker, the main microphone and the speaker are both disposed at a first end of the terminal main body, and the auxiliary microphone is disposed at a second end of the terminal main body, opposite to the first end. The method comprises the following steps:
step S310: and acquiring a first voice signal received by the main microphone in a current first time period and a second voice signal received by the auxiliary microphone in the current first time period, wherein the first time period is a time period when the mobile terminal is in a hands-free call state and no downlink voice signal is detected.
Step S320: and judging the strength of the first voice signal and the second voice signal.
Step S330: and acquiring the voice signal with higher intensity in the first voice signal and the second voice signal which are judged.
Step S340: and judging whether the mobile terminal is in a preset posture, wherein when the mobile terminal is in the preset posture, a microphone receiving the voice signal with the higher intensity is closer to a user.
Generally, a user holds the mobile terminal in a particular manner when making a hands-free call. For example, when talking in hands-free mode, the user holds the terminal with the first end, where the main microphone is arranged, directed towards himself. In this case, if the intensity of the voice signal collected by the auxiliary microphone is detected to be greater than that collected by the main microphone, it can be determined that the stronger voice signal was not uttered by the user of the mobile terminal, and taking it as the main input would not help to improve the quality of the user's uplink voice.
As an implementation, the posture of the mobile terminal may be associated with the microphone that is expected to collect the voice signal with the greater intensity. For example, a first posture may be associated with the case in which the voice signal collected by the main microphone has the greater intensity, and a second posture may be associated with the case in which the voice signal collected by the auxiliary microphone has the greater intensity.
It should be noted that in the present application, the posture data of the mobile terminal may be collected through a gyroscope, an accelerometer, an electronic compass, or other devices arranged in the mobile terminal, so as to determine the posture of the mobile terminal.
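As an illustrative sketch only, the posture check could compare the gravity component reported by the accelerometer against the microphone that picked up the stronger signal; the axis convention and tilt threshold below are assumptions of this sketch, not values given in the application.

```python
def matches_preset_posture(accel_xyz, stronger_is_main_mic, tilt_threshold=2.0):
    """Decide whether the terminal posture is consistent with the microphone
    that received the stronger signal being the one closer to the user.

    accel_xyz            - (x, y, z) acceleration in m/s^2 from the accelerometer
    stronger_is_main_mic - True if the main (bottom) microphone picked up the stronger signal
    """
    _, y, _ = accel_xyz
    # Assumed convention: positive y-tilt means the bottom (first end, main microphone)
    # points towards the user; negative y-tilt means the top (auxiliary microphone) does.
    if y > tilt_threshold:
        return stronger_is_main_mic
    if y < -tilt_threshold:
        return not stronger_is_main_mic
    return False   # posture ambiguous: do not switch, end the processing flow
```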
Step S350: and when the mobile terminal is in the preset posture, taking the voice signal with the higher intensity as a main input signal for microphone noise reduction processing output in the current first time period, and taking the voice signal with the lower intensity in the first voice signal and the second voice signal as an auxiliary input signal for microphone noise reduction processing output in the current first time period.
And when the mobile terminal is not in the preset posture, ending the voice signal processing flow.
According to this voice signal processing method, the signal intensity of the main input is greater than that of the secondary input in the microphone noise reduction processing, so that whichever microphone the user speaks towards, the uplink voice signal obtained after microphone noise reduction retains a high intensity, and the quality of the uplink voice is improved while the noise reduction effect is ensured. Moreover, the uplink voice signal is further improved by checking the posture data of the mobile terminal.
Referring to fig. 4, a speech signal processing apparatus 400 provided by the present application operates in a mobile terminal, where the mobile terminal includes a terminal main body, a main microphone, an auxiliary microphone and a speaker, the main microphone and the speaker are both disposed at a first end of the terminal main body, and the auxiliary microphone is disposed at a second end of the terminal main body, opposite to the first end. The apparatus 400 comprises: a voice signal acquiring unit 410, a judging unit 420 and a signal processing unit 430.
A voice signal obtaining unit 410, configured to obtain a first voice signal received by the primary microphone in a current first time period, and a second voice signal received by the secondary microphone in the current first time period, where the first time period is a time period when the mobile terminal is in a hands-free call state and no downlink voice signal is detected.
The determining unit 420 is configured to determine the strength of the first voice signal and the second voice signal.
In one form, the intensity is an average intensity of signal amplitudes over the first time period.
A signal processing unit 430, configured to use the voice signal with the higher intensity in the first voice signal and the second voice signal as a main input signal for microphone noise reduction processing output in the current first time period, and use the voice signal with the lower intensity in the first voice signal and the second voice signal as a sub-input signal for microphone noise reduction processing output in the current first time period.
Referring to fig. 5, a speech signal processing apparatus 500 provided by the present application operates in a mobile terminal, where the mobile terminal includes a terminal main body, a main microphone, an auxiliary microphone and a speaker, the main microphone and the speaker are both disposed at a first end of the terminal main body, and the auxiliary microphone is disposed at a second end of the terminal main body, opposite to the first end. The apparatus 500 comprises: a voice signal acquiring unit 510, a judging unit 520 and a signal processing unit 530.
A voice signal obtaining unit 510, configured to obtain a first voice signal received by the primary microphone in a current first time period, and a second voice signal received by the secondary microphone in the current first time period, where the first time period is a time period when the mobile terminal is in a hands-free call state and no downlink voice signal is detected.
The determining unit 520 is configured to determine the strength of the first voice signal and the second voice signal.
A signal processing unit 530, configured to use the voice signal with the greater intensity in the first voice signal and the second voice signal as the main input signal for the microphone noise reduction processing output in the current first time period, and use the voice signal with the lesser intensity in the first voice signal and the second voice signal as the auxiliary input signal for the microphone noise reduction processing output in the current first time period.
As one mode, the signal processing unit 530 is specifically configured to determine to obtain one of the first voice signal and the second voice signal with a higher intensity; based on the main input voice signal determined in the previous first time period, adjusting the intensity of the voice signal with larger intensity, so that the intensity difference value between the main input voice signal determined in the previous first time period and the voice signal with larger intensity meets a preset value; and taking the adjusted voice signal as a main input signal which is output in the current first time period and is used for microphone noise reduction processing.
As one mode, the signal processing unit 530 is specifically configured to obtain one of the first voice signal and the second voice signal that is determined to have a higher intensity; judging whether the intensity of the voice signal with the larger intensity is larger than a preset intensity value or not; and when the intensity of the voice signal with the larger intensity is larger than the preset intensity value, taking the voice signal with the larger intensity as a main input signal for the noise reduction processing of the microphone output in the current first time period.
Referring to fig. 6, a speech signal processing apparatus 600 provided by the present application operates in a mobile terminal, where the mobile terminal includes a terminal main body, a main microphone, an auxiliary microphone and a speaker, the main microphone and the speaker are both disposed at a first end of the terminal main body, and the auxiliary microphone is disposed at a second end of the terminal main body, opposite to the first end. The apparatus 600 comprises: a voice signal acquisition unit 610, a determination unit 620, a determination result acquisition unit 630, a posture determination unit 640, and a signal processing unit 650.
A voice signal acquiring unit 610, configured to acquire a first voice signal received by the primary microphone in a current first time period, and a second voice signal received by the secondary microphone in the current first time period, where the first time period is a time period when the mobile terminal is in a hands-free call state and no downlink voice signal is detected.
A determining unit 620, configured to determine the strength of the first voice signal and the second voice signal.
A determining result obtaining unit 630, configured to obtain the determined voice signal with the higher intensity in the first voice signal and the second voice signal.
And the gesture judging unit 640 is configured to judge whether the mobile terminal is in a preset gesture, wherein when the mobile terminal is in the preset gesture, a microphone receiving the voice signal with the higher intensity is closer to a user.
A signal processing unit 650, configured to, when the mobile terminal is in the preset posture, use the voice signal with the higher intensity as a main input signal for microphone noise reduction processing output in the current first time period, and use the voice signal with the lower intensity in the first voice signal and the second voice signal as a sub-input signal for microphone noise reduction processing output in the current first time period.
In summary, after the first voice signal received by the main microphone and the second voice signal received by the auxiliary microphone in the current first time period are acquired, their intensities are compared; the stronger signal is used as the main input for microphone noise reduction processing in the current first time period and the weaker signal as the secondary input. The main input therefore has the greater signal intensity in the noise reduction processing, so that whichever microphone the user speaks towards, the uplink voice signal obtained after microphone noise reduction retains a high intensity, and the quality of the uplink voice is improved while the noise reduction effect is ensured.
A mobile terminal provided by the present application will be described with reference to fig. 7-9.
Referring to fig. 7, based on the foregoing voice signal processing method and apparatus, an embodiment of the present invention further provides a mobile terminal 100, which includes an electronic body 10, where the electronic body 10 includes a housing 12 and a main display 120 disposed on the housing 12. The housing 12 may be made of metal, such as steel or aluminum alloy. In this embodiment, the main display 120 generally includes a display panel 111, and may also include a circuit or the like for responding to a touch operation performed on the display panel 111. The Display panel 111 may be a Liquid Crystal Display (LCD) panel, and in some embodiments, the Display panel 111 is a touch screen 109.
As shown in fig. 8, in an actual application scenario, the mobile terminal 100 may be used as a smartphone terminal, in which case the electronic body 10 generally further includes one or more processors 102 (only one is shown in the figure), a memory 104, an RF (Radio Frequency) module 106, an audio circuit 110, a sensor 114, an input module 118, and a power module 122. It will be understood by those skilled in the art that the present application is not intended to be limited to the configuration of the electronics body portion 10. For example, the electronics body section 10 may include more or fewer components than shown, or have a different configuration than shown.
Those skilled in the art will appreciate that all other components are peripheral devices with respect to the processor 102, and the processor 102 is coupled to the peripheral devices through a plurality of peripheral interfaces 124. The peripheral interface 124 may be implemented based on the following criteria: universal Asynchronous Receiver/Transmitter (UART), General Purpose Input/Output (GPIO), Serial Peripheral Interface (SPI), and Inter-Integrated Circuit (I2C), but the present invention is not limited to these standards. In some examples, the peripheral interface 124 may comprise only a bus; in other examples, the peripheral interface 124 may also include other elements, such as one or more controllers, for example, a display controller for interfacing with the display panel 111 or a memory controller for interfacing with a memory. These controllers may also be separate from the peripheral interface 124 and integrated within the processor 102 or a corresponding peripheral.
The memory 104 may be used to store software programs and modules, and the processor 102 executes various functional applications and data processing by executing the software programs and modules stored in the memory 104. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the electronic body portion 10 or the primary display 120 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The RF module 106 is configured to receive and transmit electromagnetic waves and to convert between electromagnetic waves and electrical signals, so as to communicate with a communication network or other devices. The RF module 106 may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, memory, and so forth. The RF module 106 may communicate with various networks such as the internet, an intranet or a wireless network, or with other devices via a wireless network. The wireless network may comprise a cellular telephone network, a wireless local area network, or a metropolitan area network. The wireless network may use various communication standards, protocols, and technologies, including, but not limited to, Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (WiMAX), any other suitable protocol for instant messaging, and even protocols that have not yet been developed.
The audio circuitry 110, speaker 101, sound jack 103, primary microphone 105, secondary microphone 115 collectively provide an audio interface between a user and the electronic body portion 10 or the primary display 120. Specifically, the audio circuit 110 receives sound data from the processor 102, converts the sound data into an electrical signal, and transmits the electrical signal to the speaker 101. The speaker 101 converts an electric signal into a sound wave audible to the human ear. The audio circuitry 110 also receives electrical signals from the primary microphone 105 and the secondary microphone 115, converts the electrical signals to sound data, and transmits the sound data to the processor 102 for further processing. Audio data may be retrieved from the memory 104 or through the RF module 106. In addition, audio data may also be stored in the memory 104 or transmitted through the RF module 106.
Wherein, as shown in fig. 9, the primary microphone 105 may be disposed near the speaker 101 at the bottom of the mobile terminal, and the secondary microphone 115 may be disposed near the top or side of the mobile terminal or near the back camera. It is to be understood that, as one way in the present application, the end at which the primary microphone 105 is disposed is the first end of the mobile terminal 100, and the opposite end is the second end.
The sensor 114 is disposed in the electronic body portion 10 or the main display 120; examples of the sensor 114 include, but are not limited to: light sensors, motion sensors, pressure sensors, infrared heat sensors, distance sensors, gravitational acceleration sensors, and other sensors.
Specifically, the sensors may include a light sensor 114F and a pressure sensor 114G. Among them, the pressure sensor 114G may detect a pressure generated by pressing on the mobile terminal 100. That is, the pressure sensor 114G detects pressure generated by contact or pressing between the user and the mobile terminal, for example, contact or pressing between the user's ear and the mobile terminal. Accordingly, the pressure sensor 114G may be used to determine whether contact or pressing has occurred between the user and the mobile terminal 100, as well as the magnitude of the pressure.
Referring to fig. 8 again, in the embodiment shown in fig. 8, the light sensor 114F and the pressure sensor 114G are disposed adjacent to the display panel 111. The light sensor 114F may turn off the display output when an object is near the main display 120, for example, when the electronic body portion 10 moves to the ear.
As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in various directions (generally three axes) and the magnitude and direction of gravity when stationary, and can be used for applications that recognize the attitude of the mobile terminal 100 (such as switching between landscape and portrait, related games, and magnetometer attitude calibration), for vibration-recognition related functions (such as a pedometer or tap detection), and the like. In addition, the electronic body 10 may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer and a thermometer, which are not described herein.
in this embodiment, the input module 118 may include the touch screen 109 disposed on the main display 120, and the touch screen 109 may collect touch operations of the user (for example, operations of the user on or near the touch screen 109 using any suitable object or accessory such as a finger, a stylus, etc.) and drive the corresponding connection device according to a preset program. Optionally, the touch screen 109 may include a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 102, and can receive and execute commands sent by the processor 102. In addition, the touch detection function of the touch screen 109 may be implemented by various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch screen 109, in other variations, the input module 118 may include other input devices, such as keys 107. The keys 107 may include, for example, character keys for inputting characters, and control keys for activating control functions. Examples of such control keys include a "back to home" key, a power on/off key, and the like.
The main display 120 is used to display information input by a user, information provided to the user, and various graphic user interfaces of the electronic body section 10, which may be composed of graphics, text, icons, numbers, video, and any combination thereof, and in one example, the touch screen 109 may be provided on the display panel 111 so as to be integrated with the display panel 111.
The power module 122 is used to provide power supply to the processor 102 and other components. Specifically, the power module 122 may include a power management system, one or more power sources (e.g., batteries or ac power), a charging circuit, a power failure detection circuit, an inverter, a power status indicator light, and any other components associated with the generation, management, and distribution of power within the electronic body portion 10 or the primary display 120.
The mobile terminal 100 further comprises a locator 119, the locator 119 being configured to determine an actual location of the mobile terminal 100. In this embodiment, the locator 119 implements the positioning of the mobile terminal 100 by using a positioning service, which is understood to be a technology or a service for obtaining the position information (e.g., longitude and latitude coordinates) of the mobile terminal 100 by using a specific positioning technology and marking the position of the positioned object on an electronic map.
It should be understood that the mobile terminal 100 described above is not limited to a smartphone terminal, but it should refer to a computer device that can be used in mobility. Specifically, the mobile terminal 100 refers to a mobile computer device equipped with an intelligent operating system, and the mobile terminal 100 includes, but is not limited to, a smart phone, a smart watch, a tablet computer, and the like.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (mobile terminal) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments. In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (5)

1. A voice signal processing method is applied to a mobile terminal, the mobile terminal comprises a terminal main body, a main microphone, an auxiliary microphone and a loudspeaker, the main microphone and the loudspeaker are both arranged at a first end of the terminal main body, and the auxiliary microphone is arranged at a second end, opposite to the first end, of the terminal main body; the method comprises the following steps:
acquiring a first voice signal received by the main microphone in a current first time period and a second voice signal received by the auxiliary microphone in the current first time period, wherein the first time period is a time period when the mobile terminal is in a hands-free call state and no downlink voice signal is detected;
judging the strength of the first voice signal and the second voice signal;
acquiring one voice signal with higher strength in the first voice signal and the second voice signal which are judged;
judging whether the mobile terminal is in a preset posture, wherein when the mobile terminal is in the preset posture, a microphone receiving the voice signal with the higher intensity is closer to a user;
when the mobile terminal is in the preset posture, taking the voice signal with the higher intensity as a main input signal for microphone noise reduction processing output in the current first time period, and taking the voice signal with the lower intensity in the first voice signal and the second voice signal as an auxiliary input signal for microphone noise reduction processing output in the current first time period;
and when the mobile terminal is not in the preset posture, ending the voice signal processing flow.
2. The method of claim 1, wherein the intensity is an average intensity of signal amplitudes over the first time period.
3. A voice signal processing device is characterized by being operated in a mobile terminal, wherein the mobile terminal comprises a terminal main body, a main microphone, an auxiliary microphone and a loudspeaker, the main microphone and the loudspeaker are both arranged at a first end of the terminal main body, and the auxiliary microphone is arranged at a second end, opposite to the first end, of the terminal main body; the device comprises:
a voice signal acquiring unit, configured to acquire a first voice signal received by the primary microphone in a current first time period, and a second voice signal received by the secondary microphone in the current first time period, where the first time period is a time period when the mobile terminal is in a hands-free call state and no downlink voice signal is detected;
A judging unit, configured to judge the intensities of the first voice signal and the second voice signal;
A posture determining unit, configured to determine whether the mobile terminal is in a preset posture, wherein when the mobile terminal is in the preset posture, the microphone receiving the voice signal with the higher intensity is closer to the user; and
A signal processing unit, configured to acquire, from the judged first voice signal and second voice signal, the voice signal with the higher intensity; when the mobile terminal is in the preset posture, take the voice signal with the higher intensity as a main input signal output for microphone noise reduction processing in the current first time period, and take the voice signal with the lower intensity of the first voice signal and the second voice signal as an auxiliary input signal output for microphone noise reduction processing in the current first time period; and, when the mobile terminal is not in the preset posture, end the voice signal processing flow.
4. A mobile terminal, comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the method of any one of claims 1-2.
5. A computer-readable storage medium comprising a stored program, wherein, when the program is run, the method of any one of claims 1-2 is performed.
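
For illustration only (this sketch is not part of the granted claims), the following Python code shows one way the selection logic of claims 1-2 could be realized: the average absolute amplitude of each microphone frame is taken as its intensity, the stronger signal is routed as the main input and the weaker one as the auxiliary input of a dual-microphone noise reduction stage, and the flow ends when the terminal is not in the preset posture. All function and variable names (average_intensity, select_noise_reduction_inputs, in_preset_posture) are hypothetical, and the posture flag is assumed to come from the terminal's motion sensors, which the claims do not specify.

# Minimal sketch of the selection logic in claims 1-2; helper names are hypothetical.
import numpy as np


def average_intensity(signal: np.ndarray) -> float:
    """Average of the absolute signal amplitude over the first time period (claim 2)."""
    return float(np.mean(np.abs(signal)))


def select_noise_reduction_inputs(main_mic: np.ndarray,
                                  aux_mic: np.ndarray,
                                  in_preset_posture: bool):
    """Return (main_input, auxiliary_input) for dual-microphone noise reduction,
    or None if the terminal is not in the preset posture."""
    # Judge which microphone picked up the stronger voice signal
    # during the current first time period.
    if average_intensity(main_mic) >= average_intensity(aux_mic):
        stronger, weaker = main_mic, aux_mic
    else:
        stronger, weaker = aux_mic, main_mic

    # Only re-route the inputs when the posture check confirms that the
    # microphone receiving the stronger signal is the one closer to the user.
    if not in_preset_posture:
        return None  # end the voice signal processing flow (last step of claim 1)

    return stronger, weaker


# Usage example with synthetic 20 ms frames at 16 kHz.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame_main = 0.8 * rng.standard_normal(320)  # stronger (closer) microphone
    frame_aux = 0.1 * rng.standard_normal(320)   # weaker (farther) microphone
    result = select_noise_reduction_inputs(frame_main, frame_aux,
                                           in_preset_posture=True)
    if result is not None:
        main_input, aux_input = result
        print("main-input intensity:", average_intensity(main_input))
        print("aux-input intensity:", average_intensity(aux_input))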
CN201711140814.5A 2017-11-16 2017-11-16 Voice signal processing method and device and mobile terminal Active CN107742523B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711140814.5A CN107742523B (en) 2017-11-16 2017-11-16 Voice signal processing method and device and mobile terminal

Publications (2)

Publication Number Publication Date
CN107742523A (en) 2018-02-27
CN107742523B (en) 2022-01-07

Family

ID=61233383

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711140814.5A Active CN107742523B (en) 2017-11-16 2017-11-16 Voice signal processing method and device and mobile terminal

Country Status (1)

Country Link
CN (1) CN107742523B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110491376B (en) * 2018-05-11 2022-05-10 北京国双科技有限公司 Voice processing method and device
CN109151211B (en) * 2018-09-30 2022-01-11 Oppo广东移动通信有限公司 Voice processing method and device and electronic equipment
CN111243611B (en) * 2018-11-29 2022-12-27 北京小米松果电子有限公司 Microphone wind noise elimination method and device, storage medium and mobile terminal
CN109785855B (en) * 2019-01-31 2022-01-28 秒针信息技术有限公司 Voice processing method and device, storage medium and processor
CN110189762A (en) * 2019-05-28 2019-08-30 晶晨半导体(上海)股份有限公司 Obtain the method and device of voice signal
CN110428806B (en) * 2019-06-03 2023-02-24 交互未来(北京)科技有限公司 Microphone signal based voice interaction wake-up electronic device, method, and medium
CN112769979B (en) * 2019-11-04 2023-05-05 深圳市万普拉斯科技有限公司 Voice call method and device based on terminal, computer equipment and storage medium
CN110827845B (en) * 2019-11-18 2022-04-22 西安闻泰电子科技有限公司 Recording method, device, equipment and storage medium
CN110931019B (en) * 2019-12-06 2022-06-21 广州国音智能科技有限公司 Public security voice data acquisition method, device, equipment and computer storage medium
CN111970410B (en) * 2020-08-26 2021-11-19 展讯通信(上海)有限公司 Echo cancellation method and device, storage medium and terminal
CN113539284B (en) * 2021-06-03 2023-12-29 深圳市发掘科技有限公司 Voice noise reduction method and device, computer equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2542586C2 (en) * 2009-11-24 2015-02-20 Нокиа Корпорейшн Audio signal processing device
US9330675B2 (en) * 2010-11-12 2016-05-03 Broadcom Corporation Method and apparatus for wind noise detection and suppression using multiple microphones
US9031259B2 (en) * 2011-09-15 2015-05-12 JVC Kenwood Corporation Noise reduction apparatus, audio input apparatus, wireless communication apparatus, and noise reduction method
CN103079148B (en) * 2012-12-28 2018-05-04 中兴通讯股份有限公司 A kind of method and device of terminal dual microphone noise reduction
CN105162950B (en) * 2015-07-08 2020-09-25 Tcl移动通信科技(宁波)有限公司 Mobile terminal and method for switching microphones in call
JP6536320B2 (en) * 2015-09-28 2019-07-03 富士通株式会社 Audio signal processing device, audio signal processing method and program
CN106941549A (en) * 2017-04-28 2017-07-11 苏州科技大学 The communication device for mobile phone and its processing method of a kind of dual microphone noise reduction

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1801325A (en) * 2004-12-31 2006-07-12 北京中星微电子有限公司 Automatic gain control method for digital audio frequency
CN104538040A (en) * 2014-11-28 2015-04-22 广东欧珀移动通信有限公司 Method and device for dynamically selecting communication voice signals
CN104702787A (en) * 2015-03-12 2015-06-10 深圳市欧珀通信软件有限公司 Sound acquisition method applied to MT (Mobile Terminal) and MT
WO2016181752A1 (en) * 2015-05-12 2016-11-17 日本電気株式会社 Signal processing device, signal processing method, and signal processing program
CN106657508A (en) * 2016-11-30 2017-05-10 深圳天珑无线科技有限公司 Terminal accessory and terminal component for realizing dual-MIC noise reduction

Also Published As

Publication number Publication date
CN107742523A (en) 2018-02-27

Similar Documents

Publication Publication Date Title
CN107742523B (en) Voice signal processing method and device and mobile terminal
CN107464557B (en) Call recording method and device, mobile terminal and storage medium
CN108684029B (en) Bluetooth pairing connection method and system, Bluetooth device and terminal
CN108777731B (en) Key configuration method and device, mobile terminal and storage medium
CN109189362B (en) Sound production control method and device, electronic equipment and storage medium
CN106940997B (en) Method and device for sending voice signal to voice recognition system
CN108810198B (en) Sound production control method and device, electronic device and computer readable medium
CN108521501B (en) Voice input method, mobile terminal and computer readable storage medium
CN109817241B (en) Audio processing method, device and storage medium
CN111477243B (en) Audio signal processing method and electronic equipment
CN109739394B (en) SAR value processing method and mobile terminal
CN109189360B (en) Screen sounding control method and device and electronic device
CN108492837B (en) Method, device and storage medium for detecting audio burst white noise
CN111093137B (en) Volume control method, volume control equipment and computer readable storage medium
CN111246061B (en) Mobile terminal, method for detecting shooting mode and storage medium
CN111182118B (en) Volume adjusting method and electronic equipment
CN108769364B (en) Call control method, device, mobile terminal and computer readable medium
CN109062533B (en) Sound production control method, sound production control device, electronic device, and storage medium
CN108449787B (en) Connection control method and device and electronic equipment
CN107728990B (en) Audio playing method, mobile terminal and computer readable storage medium
CN108920224A (en) Dialog information processing method, device, mobile terminal and storage medium
CN109032008B (en) Sound production control method and device and electronic device
CN108958631B (en) Screen sounding control method and device and electronic device
CN109144461B (en) Sound production control method and device, electronic device and computer readable medium
CN107734147B (en) Step recording method, mobile terminal and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant