CN114205701B - Noise reduction method, terminal device and computer readable storage medium - Google Patents

Noise reduction method, terminal device and computer readable storage medium

Info

Publication number
CN114205701B
Authority
CN
China
Prior art keywords
terminal equipment
user
ear
transfer function
sound
Prior art date
Legal status
Active
Application number
CN202010981457.0A
Other languages
Chinese (zh)
Other versions
CN114205701A (en)
Inventor
王文东
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010981457.0A
Priority to PCT/CN2021/102907 (published as WO2022057365A1)
Publication of CN114205701A
Application granted
Publication of CN114205701B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00: Details of transducers, loudspeakers or microphones
    • H04R1/10: Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R1/1083: Reduction of ambient noise
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10K: SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00: Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/16: Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/175: Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
    • G10K11/178: Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00: Details of transducers, loudspeakers or microphones
    • H04R1/10: Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2460/00: Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
    • H04R2460/01: Hearing devices using active noise cancellation

Abstract

The embodiments of the invention disclose a noise reduction method, a terminal device, and a computer-readable storage medium, in which the noise reduction processing is carried out on the terminal device, so that a user can experience a degree of noise reduction without extra cost. The method provided by the embodiments of the invention comprises the following steps: the terminal device collects a sound signal; the terminal device acquires a first sound transfer function between the user's ear and the terminal device; the terminal device processes the sound signal according to the first sound transfer function to obtain an ear noise signal; and the terminal device performs noise reduction processing on the output signal of the currently used earphone according to the ear noise signal.

Description

Noise reduction method, terminal device and computer readable storage medium
Technical Field
The present invention relates to the field of audio, and in particular, to a noise reduction method, a terminal device, and a computer-readable storage medium.
Background
When using earphones to enjoy music or movies, how to improve the listening experience in noisy environments has long been a major concern for users. At present, the Active Noise Cancellation (ANC) function is widely applied in earphones. To realize active noise reduction, a microphone and a processing circuit need to be arranged on the earphone. The microphone is usually arranged on the earphone shell to collect environmental noise, and some active noise reduction earphones place another microphone inside the earphone, close to the loudspeaker, to collect the sound between the earphone and the human ear. The signals collected by the microphones are processed by an algorithm to estimate the environmental noise reaching the ear, and the noise reduction effect is then achieved by superposing the phase-inverted signal of this estimate on the output signal of the earphone.
However, the performance of the processor on the earphone is limited and cannot support algorithms of higher complexity, so the noise reduction effect under many complex conditions is not ideal. The microphones on the earphone also localize sound sources with a relatively large error, so the noise reduction effect on moving noise sources is not ideal either. In addition, noise reduction earphones are relatively costly and carry a high selling price.
Disclosure of Invention
The embodiments of the present invention provide a noise reduction method, a terminal device, and a computer-readable storage medium, which perform the noise reduction processing for an earphone on the terminal device itself.
In view of this, a first aspect of the present invention provides a noise reduction method, which may include:
the terminal equipment collects sound signals;
the terminal equipment acquires a first sound transfer function between the ear of a user and the terminal equipment;
the terminal equipment processes the sound signal according to the first sound transfer function to obtain an ear noise signal;
and the terminal equipment performs noise reduction processing on the output signal of the currently used earphone according to the ear noise signal.
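The four steps of the first aspect can be read as a simple signal-processing pipeline. The sketch below is illustrative only: the function names are invented for this example, and the first sound transfer function is modeled as a short finite impulse response, which the patent does not mandate.

```python
import numpy as np

def collect_sound(n_samples: int, rng: np.random.Generator) -> np.ndarray:
    """Step 1: the terminal device collects an ambient sound signal (simulated here)."""
    return rng.standard_normal(n_samples)

def get_transfer_function() -> np.ndarray:
    """Step 2: acquire a first sound transfer function, modeled as a short impulse response."""
    return np.array([0.6, 0.3, 0.1])

def estimate_ear_noise(sound: np.ndarray, h: np.ndarray) -> np.ndarray:
    """Step 3: filter the collected signal with the transfer function to estimate ear noise."""
    return np.convolve(sound, h)[: len(sound)]

def denoise_output(output: np.ndarray, ear_noise: np.ndarray) -> np.ndarray:
    """Step 4: superpose the anti-phase ear-noise estimate on the earphone output."""
    return output - ear_noise

rng = np.random.default_rng(0)
noise = collect_sound(1000, rng)
h = get_transfer_function()
ear_noise = estimate_ear_noise(noise, h)
music = np.sin(2 * np.pi * 440 * np.arange(1000) / 48000)
clean = denoise_output(music + ear_noise, ear_noise)
```

With a perfect transfer-function estimate the cancellation is exact; in practice the residual noise depends on how well the first sound transfer function matches the true acoustic path.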
A second aspect of the present invention provides a terminal device, which may include:
the acquisition module is used for acquiring sound signals;
the processing module is used for acquiring a first sound transfer function between the ear of the user and the terminal equipment; processing the sound signal according to the first sound transfer function to obtain an ear noise signal; and according to the ear noise signal, carrying out noise reduction processing on an output signal of the currently used earphone.
A third aspect of the present invention provides a terminal device, which may include:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory for performing the method according to the first aspect of the embodiment of the present invention.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method according to the first aspect of the embodiments of the present invention.
A fifth aspect of the embodiments of the present invention discloses a computer program product, which, when running on a computer, causes the computer to execute the method of the first aspect of the embodiments of the present invention.
A sixth aspect of the embodiments of the present invention discloses an application publishing platform, which is configured to publish a computer program product; when the computer program product runs on a computer, the computer is caused to execute the method according to the first aspect of the embodiments of the present invention.
According to the technical scheme, the embodiment of the invention has the following advantages:
in the embodiments of the present invention, the terminal device collects a sound signal; acquires a first sound transfer function between the user's ear and the terminal device; processes the sound signal according to the first sound transfer function to obtain an ear noise signal; and performs noise reduction processing on the output signal of the currently used earphone according to the ear noise signal. Because the noise reduction processing is carried out on the terminal device, the user can experience a degree of noise reduction without extra cost. An estimate of the ear noise signal is obtained by passing the sound signal received by the terminal device through the first sound transfer function, and the noise reduction effect is then achieved by superposing the phase-inverted signal of the ear noise signal on the output signal of the earphone. Moreover, the processing performance of the terminal device is superior to that of an earphone, so the noise reduction effect is better.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the description of the embodiments and the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1A is a schematic diagram of an embodiment of a terminal device in the embodiment of the present invention;
fig. 1B is a schematic diagram of another embodiment of the terminal device in the embodiment of the present invention;
FIG. 2 is a schematic diagram of an embodiment of a noise reduction method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of another embodiment of a noise reduction method in an embodiment of the invention;
fig. 4A is a schematic flowchart of registration performed by a terminal device in an embodiment of the present invention;
FIG. 4B is a schematic diagram of a 3D model of a human face according to an embodiment of the present invention;
FIG. 4C is another schematic diagram of a 3D model of a human face according to an embodiment of the invention;
fig. 4D is a schematic diagram of the terminal device acquiring the type of the headset in the embodiment of the present invention;
fig. 4E is another schematic diagram of the terminal device acquiring the model of the earphone in the embodiment of the present invention;
fig. 4F is another schematic diagram of the terminal device acquiring the model of the headset in the embodiment of the present invention;
fig. 4G is a schematic diagram of the terminal device in the second play mode according to the embodiment of the present invention;
FIG. 5 is a schematic diagram of another embodiment of a noise reduction method in an embodiment of the present invention;
fig. 6A is a schematic diagram of another embodiment of the terminal device in the embodiment of the present invention;
fig. 6B is a schematic diagram of another embodiment of the terminal device in the embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a noise reduction method, terminal equipment and a computer readable storage medium, which are used for performing noise reduction processing on an earphone on the terminal equipment.
In order to make those skilled in the art better understand the technical solutions of the present invention, the technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It can be understood that the terminal device according to the embodiments of the present invention may include any general handheld electronic terminal capable of connecting a wired earphone, such as a mobile phone, a smart phone, a portable terminal, a Personal Digital Assistant (PDA), a Portable Multimedia Player (PMP) device, a notebook computer, a Note Pad, a Wireless Broadband (Wibro) terminal, a tablet Personal Computer (PC), a smart PC, a Point of Sales (POS) terminal, or a vehicle-mounted computer.
The terminal device may also be a wearable device. A wearable device may be worn directly on the user, or may be a portable electronic device integrated into the user's clothing or accessories. A wearable device is more than a piece of hardware: through software support, data interaction, and cloud interaction, it can realize powerful intelligent functions, for example computing, positioning, and alarming, and can be connected with mobile phones and various other terminals. Wearable devices may include, but are not limited to, wrist-supported watch types (e.g., watches and wristbands), foot-supported shoe types (e.g., shoes, socks, or other products worn on the legs), head-supported glass types (e.g., glasses, helmets, and headbands), and various non-mainstream types such as smart clothing, bags, crutches, and accessories.
Fig. 1A is a schematic diagram of an embodiment of a terminal device according to an embodiment of the present invention. The terminal device may include more or fewer functional modules than those listed below:
face recognition, face 3D modeling, face and binaural features, headset model recognition, headset model selection, headset sound insulation characteristics, distance estimation, sound source localization, binaural filtering and downmix, inertial measurement unit, collision detection, headset characteristic filtering, and audio mixing.
Optionally, the distance estimation module shown in fig. 1A may also be implemented by an ultrasonic ranging module, as shown in fig. 1B, which is a schematic diagram of another embodiment of the terminal device in the embodiment of the present invention. It should be noted that, regarding the functions of each functional module shown in fig. 1A and 1B, the following method embodiments will be described.
The following describes the technical solution of the present invention by way of example with reference to the terminal device shown in fig. 1A or fig. 1B. As shown in fig. 2, which is a schematic diagram of an embodiment of a noise reduction method in an embodiment of the present invention, the noise reduction method may include:
201. the terminal equipment collects sound signals.
Optionally, the terminal device collects the sound signal through a microphone.
Optionally, the terminal device collects a plurality of sound signals through a microphone array (i.e., a plurality of microphones). Here, the sound signal may be understood as an ambient noise signal.
202. The terminal equipment acquires a first sound transfer function between the ear of the user and the terminal equipment.
Optionally, the obtaining, by the terminal device, a first sound transfer function between the ear of the user and the terminal device may include, but is not limited to, the following implementation manners:
mode 1: the method comprises the steps that when the terminal equipment detects that the terminal equipment is in a first playing mode at present, a first coordinate of a feature point in a user face relative to the terminal equipment is obtained, and the first playing mode is a playing mode of the user face facing the terminal equipment; the terminal equipment obtains the position of a noise source according to the sound signal; and the terminal equipment calculates to obtain a first sound transfer function between the ear of the user and the terminal equipment according to the first coordinate and the position of the noise source.
Mode 2: when the terminal device detects that it is currently in a second play mode, it receives a using posture and a terminal-device placement position input by the user, where the second play mode is a play mode in which the user's face does not face the terminal device; the terminal device determines the first coordinate of the feature point in the user's face relative to the terminal device according to the using posture and the placement position; the terminal device obtains the noise source position according to the sound signal; and the terminal device calculates the first sound transfer function between the user's ear and the terminal device according to the first coordinate and the noise source position.
It should be noted that, in modes 1 and 2, the first coordinate of the feature point in the user's face relative to the terminal device may be understood as its coordinate relative to the center point of the terminal device, or relative to each microphone in the terminal device; this is not specifically limited here. Likewise, the first sound transfer function between the user's ear and the terminal device may be a transfer function for a single ear, or transfer functions for both ears.
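Both modes derive the noise source position from the collected sound signals. The patent does not disclose the localization algorithm; one common minimal building block, sketched here purely as an assumption, is estimating the time difference of arrival (TDOA) between two microphones by cross-correlation:

```python
import numpy as np

def tdoa_samples(sig_a: np.ndarray, sig_b: np.ndarray) -> int:
    """Estimate the delay (in samples) of sig_b relative to sig_a by cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    return int(np.argmax(corr)) - (len(sig_a) - 1)

fs = 48_000          # sample rate, Hz
c = 343.0            # speed of sound, m/s
rng = np.random.default_rng(1)
src = rng.standard_normal(4096)

delay = 12           # true inter-microphone delay in samples
mic1 = src
mic2 = np.concatenate([np.zeros(delay), src[:-delay]])  # delayed copy at the second mic

d_samples = tdoa_samples(mic1, mic2)
path_difference = d_samples * c / fs   # metres of extra acoustic path to the second mic
```

Given the TDOAs of several microphone pairs and the array geometry, the noise source position can then be solved geometrically (e.g., by multilateration).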
Optionally, in mode 1 and mode 2, the method may further include: the terminal device obtains its current tilt angle through an inertial measurement unit. In that case, calculating the first sound transfer function between the user's ear and the terminal device according to the first coordinate and the noise source position may include: the terminal device calculates the first sound transfer function between the user's ear and the terminal device according to the first coordinate, the noise source position, and the current tilt angle of the terminal device.
By way of example, an Inertial Measurement Unit (IMU) may include, but is not limited to: a gravity sensor, an acceleration sensor, and a gyroscope. The terminal device can continuously read the inertial measurement unit to obtain its current tilt angle. It will be appreciated that the current tilt of the terminal device is an estimate.
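As a hedged illustration of that tilt estimate, a static accelerometer reading can be converted to pitch and roll angles from the direction of gravity; the axis convention and function name below are assumptions for this example, not part of the patent:

```python
import math

def tilt_from_accel(ax: float, ay: float, az: float) -> tuple[float, float]:
    """Estimate pitch and roll (degrees) from a static accelerometer reading.

    Assumes the accelerometer reports the gravity vector in the device frame
    (x to the right of the screen, y up along the screen, z out of the screen).
    """
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Device lying flat, screen up: gravity is entirely along +z, so both angles are zero.
pitch, roll = tilt_from_accel(0.0, 0.0, 9.81)
```

In practice the gyroscope reading would be fused in (e.g., with a complementary filter) to keep the estimate stable while the device moves.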
203. And the terminal equipment processes the sound signal according to the first sound transfer function to obtain an ear noise signal.
Optionally, the method may further include: the terminal equipment acquires a second sound transfer function between the currently used earphone shell and the ear of a user; the terminal device processes the sound signal according to the first sound transfer function to obtain an ear noise signal, which may include: and the terminal equipment processes the sound signal according to the first sound transfer function and the second sound transfer function to obtain an ear noise signal.
204. And the terminal equipment performs noise reduction processing on the output signal of the currently used earphone according to the ear noise signal.
The terminal device performing noise reduction processing on the output signal of the currently used earphone according to the ear noise signal may include: the terminal device performs phase inversion processing on the ear noise signal to obtain a phase-inverted ear noise signal; the terminal device superimposes the phase-inverted ear noise signal on the output signal of the currently used earphone. It is to be understood that the ear noise signal may be a monaural noise signal or a binaural noise signal.
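The phase inversion and superposition described above can be sketched as follows; the scenario (a 440 Hz tone as earphone output plus a 100 Hz ear-noise estimate) is invented for illustration:

```python
import numpy as np

def apply_noise_reduction(output: np.ndarray, ear_noise: np.ndarray) -> np.ndarray:
    """Superpose the phase-inverted ear-noise estimate onto the earphone output."""
    inverted = -ear_noise          # phase inversion of the estimated ear noise
    return output + inverted

t = np.arange(480) / 48_000
music = np.sin(2 * np.pi * 440 * t)               # earphone output signal
ear_noise = 0.2 * np.sin(2 * np.pi * 100 * t)     # estimated noise reaching the ear

# The real noise still arrives acoustically at the ear, so add it back on top of
# the treated output; it cancels against the inverted estimate.
treated = apply_noise_reduction(music, ear_noise) + ear_noise
```

If the estimate is imperfect (wrong amplitude or phase), only partial cancellation results, which is why the accuracy of the first sound transfer function matters.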
Optionally, the processing, by the terminal device, the sound signal according to the first sound transfer function and the second sound transfer function to obtain the ear noise signal may include: the terminal equipment carries out first filtering and down-mixing processing on the sound signal according to the first sound transfer function to obtain an out-of-ear noise signal; the terminal equipment performs second filtering processing on the noise signal outside the ear according to the second sound transfer function to obtain the noise signal inside the ear; the terminal device performs noise reduction processing on the output signal of the currently used earphone according to the ear noise signal, and the noise reduction processing may include: and the terminal equipment performs noise reduction processing on the output signal of the currently used earphone according to the in-ear noise signal.
It will be appreciated that each sound signal corresponds to one first sound transfer function in the monaural case, and to two first sound transfer functions in the binaural case. Taking the monaural case as an example, the plurality of sound signals are each filtered according to the corresponding first sound transfer function to obtain a plurality of filtered sound signals, and the filtered sound signals are down-mixed to obtain the out-of-ear noise signal.
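Taking the monaural, microphone-array case, the first filtering and down-mixing followed by the second filtering can be sketched like this. Modeling each transfer function as an impulse response and the down-mix as an average are assumptions of this sketch; the patent does not fix either choice.

```python
import numpy as np

def filter_and_downmix(signals: list[np.ndarray],
                       transfer_fns: list[np.ndarray]) -> np.ndarray:
    """First filtering: convolve each microphone signal with its own first sound
    transfer function; down-mix: average the filtered channels into one
    out-of-ear noise signal."""
    filtered = [np.convolve(s, h)[: len(s)] for s, h in zip(signals, transfer_fns)]
    return np.mean(filtered, axis=0)

def second_filter(out_of_ear: np.ndarray, h2: np.ndarray) -> np.ndarray:
    """Second filtering: apply the second transfer function (earphone shell to
    inside the ear) to obtain the in-ear noise signal."""
    return np.convolve(out_of_ear, h2)[: len(out_of_ear)]

rng = np.random.default_rng(2)
mics = [rng.standard_normal(256) for _ in range(3)]          # three-mic array
h1s = [np.array([1.0]), np.array([0.5]), np.array([0.25])]   # illustrative responses
h2 = np.array([0.8])                                         # illustrative shell response

out_of_ear = filter_and_downmix(mics, h1s)
in_ear = second_filter(out_of_ear, h2)
```

The in-ear noise signal is then phase-inverted and superposed on the earphone output, as in step 204.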
Optionally, the first sound transfer function may also be understood as a first mapping relationship, where the first mapping relationship represents sound transfer characteristics between the user ear and the terminal device; a second sound transfer function may also be understood as a second mapping that characterizes the sound transfer between the current use of the headphone housing by the user and the inside of the user's ear.
In the embodiment of the invention, the noise reduction processing process is carried out on the terminal equipment, namely, the noise reduction processing can be carried out on the earphone on the terminal equipment. The method comprises the steps of obtaining estimation of ear noise signals after sound signals received by terminal equipment pass through a first sound transfer function, and achieving noise reduction effect by superposing opposite phase signals of the ear noise signals on output signals of earphones. The user can experience a certain degree of noise reduction effect without additional cost. And the processing performance of the terminal equipment is superior to that of an earphone, and the noise reduction effect is better.
The following description is made for the terminal device in a first play mode and a second play mode respectively, and as shown in fig. 3, the schematic diagram of another embodiment of the noise reduction method in the embodiment of the present invention may include:
301. the terminal equipment collects sound signals.
It can be understood that, before the terminal device performs step 301, a registration process may also be performed. As shown in fig. 4A, which is a schematic flowchart of registration performed by a terminal device in an embodiment of the present invention, the process may include: 401. the terminal device collects a front image of the user; 402. the terminal device calls a face recognition library according to the front image to obtain the identity of the user; 403. the terminal device collects head rotation images of the user; 404. the terminal device builds a 3D model of the user's face according to the head rotation images. Optionally, the terminal device may establish face 3D models for multiple users. Fig. 4B is a schematic diagram of a face 3D model according to an embodiment of the present invention. Fig. 4C is another schematic diagram of the face 3D model according to an embodiment of the present invention.
Optionally, 405, the terminal device outputs a first prompt message; and the terminal equipment responds to the operation of the user on the first prompt message and acquires an earphone model list used by the user. Optionally, the terminal device may obtain a list of headset models used by multiple users.
Optionally, 406, the terminal device stores the user's identity, the 3D face model, and a list of headset models used by the user.
Optionally, the first prompt message may prompt the user to input a model of each currently used headset, as shown in fig. 4D, which is a schematic diagram illustrating that the terminal device obtains the model of the headset in the embodiment of the present invention. The user may also be prompted to select a model of each earphone currently used by the user from a preset earphone model list, as shown in fig. 4E, another schematic diagram of acquiring the model of the earphone for the terminal device in the embodiment of the present invention is shown. The user may also be prompted to determine the model of the headset currently used by the user through an image analysis method, as shown in fig. 4F, which is another schematic diagram of acquiring the model of the headset for the terminal device in the embodiment of the present invention.
It can be understood that, with the image analysis method, the first prompt message may be "Turn on the camera and take a picture to identify the earphone model?". If the user selects "yes", the terminal device responds to that operation by starting a photographing or video mode, obtains a picture or a video of the earphone in use in response to the user's shooting operation, and determines the model of the earphone used by the user through image analysis.
For example, in the registration process shown in fig. 4A, first, the terminal device may collect a front image of the user, and perform face recognition to obtain the identity of the user. Subsequently, the user can turn the head to one side, then slowly rotate 180 degrees to the other side, and the terminal equipment can collect user images in real time, detect feature points and model the human face 3D model. Then, the terminal device may prompt the user to select the model of all the earphones used by the user from the preset earphone model list. If the earphone model used by the user is not in the preset earphone model list, the terminal equipment can collect the earphone image and select the earphone model closest to the earphone appearance of the earphone image from the online earphone image database, or the terminal equipment reminds the user to manually input the earphone model. Finally, the terminal device can store the established face 3D model, the list of models of earphones used by the user, and the user's identification in a storage area of the terminal device.
Optionally, the terminal device collects the sound signal through a microphone array. That is, the terminal device collects a plurality of sound signals through a plurality of microphones. Optionally, when the terminal device detects that the current user uses the earphone to connect the terminal device, the terminal device collects the sound signal.
Optionally, when the terminal device detects that the current user uses the earphone to connect the terminal device, the terminal device may further collect a front face image of the current user, and call a face recognition module of the system to obtain a current identity of the current user; and the terminal equipment reads the face 3D model of the current user from the storage area according to the current identity.
It can be understood that the face recognition module in the terminal device is used for identifying the identity of the user and reading the corresponding face 3D model and earphone models.
The face 3D modeling module in the terminal device mainly detects feature points and estimates depth information while acquiring continuous images over 180 degrees of the front of the user's head, and builds the face 3D model. In the face 3D modeling process, the embodiment of the invention is mainly concerned with the coordinates of the user's ears relative to other feature points of the face (such as the nose tip, eye corners, and mouth corners), so the stored face 3D model may retain only these feature points and discard the rest, reducing the model size and saving storage space.
In the embodiment of the invention, the characteristic points and the ear characteristics in each user face can be modeled by utilizing the camera on the terminal equipment, and the first coordinate of the user face relative to the terminal equipment is detected in real time when the user holds the terminal equipment for use, so that the noise reduction effect is highly adapted to the characteristics and behaviors of the user.
302. When detecting that it is currently in the first play mode, the terminal device acquires a first coordinate of a feature point in the user's face relative to the terminal device, where the first play mode is a play mode in which the user's face faces the terminal device.
For example, the first play mode may be a scene that a user faces a screen of the terminal device, plays a video, plays a game, or swipes a webpage while listening to a song, or swipes a webpage while listening to a broadcast. Optionally, in the first play mode, the user may hold the terminal device by hand.
Optionally, the terminal device acquiring the first coordinate of the feature point in the user's face relative to the terminal device may include: the terminal device collects a face image of the user through the camera for visual analysis, or acquires the first coordinate of the feature point in the user's face relative to the terminal device through an ultrasonic ranging method.
Optionally, the first coordinate of the feature point in the user face with respect to the terminal device may be determined by a first distance of the feature point in the user face with respect to the terminal device.
It is understood that the distance estimation module in the terminal device is used for estimating, in real time, the first distance between a feature point in the user's face (e.g., nose tip, eye corner, mouth corner) and the terminal device while the user is using the earphone. Illustratively, when only an ordinary camera is available, the first distance can be roughly acquired from the camera's focal length information combined with face detection. Optionally, much existing terminal equipment is configured with a structured-light camera, a binocular camera, or a Time-of-Flight (ToF) camera, which can conveniently measure depth of field and yield a more accurate first distance.
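The rough camera-based estimate follows the pinhole model: distance ≈ focal length (in pixels) × real object width / detected pixel width. All numbers below are illustrative assumptions, not calibration data from the patent:

```python
def distance_from_face(focal_px: float, real_face_width_m: float,
                       face_width_px: float) -> float:
    """Pinhole-camera estimate of the camera-to-face distance, in metres."""
    return focal_px * real_face_width_m / face_width_px

# Assumed values: 1000 px focal length, 0.15 m face width, 300 px detected width.
d = distance_from_face(1000.0, 0.15, 300.0)
```

A depth camera (structured light, stereo, or ToF) replaces this single-number estimate with a dense, more accurate depth map.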
It can be understood that the distance estimation module in the terminal device may be replaced by an ultrasonic ranging module; that is, the camera-plus-visual-analysis distance estimation used while the user wears the earphones may also be replaced with ultrasonic ranging. Compared with visual analysis, ultrasonic ranging has lower computational demand, lower power consumption, stronger adaptability to various lighting environments, and better privacy protection for the user.
Optionally, the calculating, by the terminal device, of the first sound transfer function between the user's ear and the terminal device according to the first coordinate and the noise source position may include: obtaining a third coordinate of the user's ear relative to the terminal device from the first coordinate and a preset second coordinate of the user's ear relative to the feature point in the user's face; and calculating the first sound transfer function between the user's ear and the terminal device from the third coordinate and the noise source position. It can be understood that the second coordinate of the user's ear relative to the feature point in the user's face is obtained empirically, for example, as an empirical value averaged over the ear-to-feature-point coordinates measured for a large number of adults.
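The coordinate composition described here is simple vector addition: the ear's position relative to the device (third coordinate) is the feature point's position relative to the device (first coordinate) plus the preset ear offset from that feature point (second coordinate). A minimal sketch, with all numeric values purely illustrative:

```python
def ear_coordinate(first_coord, second_coord):
    """Third coordinate (ear w.r.t. device) = first coordinate
    (face feature point w.r.t. device) + preset second coordinate
    (ear w.r.t. that feature point), component-wise."""
    return tuple(a + b for a, b in zip(first_coord, second_coord))

# nose tip 300 mm in front of the screen; assumed empirical ear offset
third = ear_coordinate((0.0, 0.0, 300.0), (75.0, 20.0, 60.0))
```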
Optionally, the second coordinate may be obtained from a preset second distance between the user's ear and the feature point in the user's face; the third coordinate may be derived from a third distance of the user's ear relative to the terminal device.
Optionally, the method may further include: the terminal device obtains a fourth coordinate of the user's ear relative to the feature point in the user's face according to the user's facial image and a preset 3D face model of the user. In this case, the calculating of the first sound transfer function between the user's ear and the terminal device according to the first coordinate and the noise source position may include: obtaining a fifth coordinate of the user's ear relative to the terminal device from the first coordinate and the fourth coordinate; and calculating the first sound transfer function between the user's ear and the terminal device from the fifth coordinate and the noise source position. It can be understood that the facial image of the user here may be a frontal face image.
Optionally, the fourth coordinate may be obtained from a fourth distance between the user's ear and the feature point in the user's face; the fifth coordinate may be derived from a fifth distance of the user's ear relative to the terminal device.
It can be understood that the 3D face model module in the terminal device is also used to estimate the second distance between the user's ear and the feature point in the user's face.
By combining the 3D face model module with the distance estimation module, the terminal device obtains the third distance between the user's ear and the terminal device from the first distance between the feature point in the user's face and the terminal device and the second distance between the user's ear and that feature point.
Optionally, in scenarios with low precision requirements, the 3D face modeling module may be replaced with a simple visual analysis module. For example, a captured frontal image of the user is first analyzed visually to estimate the first coordinates of feature points in the user's face relative to the terminal device and the fourth coordinates of the user's ears relative to other facial feature points (e.g., eye corners, nose tip, mouth corners). Then images of the two sides of the user's face are captured and analyzed in the same way to refine these estimates. From these coordinate estimates, distance estimates can be obtained, from which the first sound transfer function can be estimated while the user is using the earphones. Optionally, the visual analysis module may also estimate and combine distances between other facial feature points.
303. The terminal device obtains the noise source position according to the sound signal.
Optionally, the acquiring of the sound signal by the terminal device may include: the terminal device collects a plurality of sound signals through a plurality of microphones. The obtaining of the noise source position according to the sound signals may then include: the terminal device calculates the time difference of arrival of the plurality of sound signals using a phase-weighted cross-correlation method, and obtains the noise source position from the time difference of arrival.
Optionally, the noise source position may include: the direction of arrival (DOA) of the noise source, or the direction of arrival and the distance of the noise source.
It can be understood that the sound source localization module in the terminal device localizes the noise source using the multi-channel sound signals, i.e., the plurality of noise signals, collected by the microphone array on the terminal device. Depending on the number of microphones and the actual noise reduction requirement, only direction-of-arrival (DOA) estimation of the noise source may be performed, or distance estimation of the noise source may additionally be performed. When localizing the noise source, time-difference-of-arrival (TDOA) estimation may be performed using the phase transform (PHAT) weighted generalized cross-correlation (GCC) method, and the DOA and distance of the noise source may then be obtained by combining the TDOA with the geometry of the microphone array. It should be noted that this DOA is relative to the plane of the terminal device; the absolute DOA of the noise source is obtained by additionally reading the current tilt angle of the terminal device from its IMU. With this mechanism, when the user holds the terminal device in different ways (e.g., a horizontal or vertical grip), the IMU reading differs and the detected absolute DOA differs accordingly.
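The GCC-PHAT TDOA estimate and the far-field DOA for a two-microphone pair can be sketched as follows. This is a generic sketch of the named technique, not the embodiment's exact implementation; function names and the microphone spacing are assumptions.

```python
import numpy as np

def gcc_phat_tdoa(sig, ref, fs):
    """Estimate the TDOA (seconds) of `sig` relative to `ref` using
    PHAT-weighted generalized cross-correlation."""
    n = sig.size + ref.size
    S = np.fft.rfft(sig, n=n)
    R = np.fft.rfft(ref, n=n)
    G = S * np.conj(R)
    G /= np.maximum(np.abs(G), 1e-12)        # PHAT weighting: keep phase only
    cc = np.fft.irfft(G, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

def doa_from_tdoa(tau_s, mic_spacing_m, c=343.0):
    """Far-field DOA (degrees from broadside) for one microphone pair."""
    return np.degrees(np.arcsin(np.clip(c * tau_s / mic_spacing_m, -1.0, 1.0)))
```

With more than two microphones, pairwise TDOAs plus the array geometry allow distance estimation as well, as the module description above notes.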
It should be noted that the timing sequence of steps 302 and 303 is not limited.
304. The terminal device calculates the first sound transfer function between the user's ear and the terminal device according to the first coordinate and the noise source position.
Optionally, the method may further include: the terminal device obtains its current tilt angle through an inertial measurement unit (IMU). The calculating of the first sound transfer function between the user's ear and the terminal device may then include: calculating the first sound transfer function according to the first coordinate, the noise source position, and the current tilt angle of the terminal device.
Optionally, the terminal device obtains a first acceleration, and determines whether a collision has occurred according to at least one of the sound signal, the current tilt angle, and the first acceleration. For example, the terminal device may read the first acceleration directly from the inertial measurement unit.
Optionally, the terminal device determines whether a collision has occurred according to at least one of the sound signal, the current tilt angle, and the first acceleration, in implementations including but not limited to the following:
(1) The terminal device determines whether it has collided according to the sound signal, by a method based on short-time energy analysis and spectral feature analysis, or by a method based on a deep neural network; and/or
(2) The terminal device determines whether it has collided according to the first acceleration; and/or
(3) The terminal device determines a second acceleration according to the current tilt angle, and determines whether it has collided according to the second acceleration.
Optionally, the determining, by the terminal device, of whether a collision has occurred according to the first acceleration may include: the terminal device determines the absolute value of a first difference between the first acceleration (the acceleration of the current period) and the first acceleration of the previous period; if this absolute difference is smaller than a first preset threshold, it is determined that no collision has occurred.
Optionally, the determining of the second acceleration according to the current tilt angle and of whether a collision has occurred may include: the terminal device determines the second acceleration (the acceleration of the current period) according to the current tilt angle; determines the absolute value of a second difference between this second acceleration and the second acceleration of the previous period, where the latter is determined from the tilt angle of the previous period; and if this absolute difference is smaller than the first preset threshold, determines that no collision has occurred.
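The acceleration-difference test above can be sketched as a small stateful detector. A minimal sketch under the stated threshold rule; the class name and threshold value are illustrative assumptions.

```python
class CollisionDetector:
    """Flag a collision when the acceleration reading changes by at
    least `threshold` between consecutive periods (sketch of the
    first-difference rule described above)."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.prev = None            # acceleration of the previous period

    def update(self, accel):
        if self.prev is None:       # first period: nothing to compare
            collided = False
        else:
            collided = abs(accel - self.prev) >= self.threshold
        self.prev = accel
        return collided

det = CollisionDetector(threshold=5.0)   # assumed threshold, in m/s^2
det.update(9.8)                          # first reading
quiet = det.update(9.9)                  # small change -> no collision
bump = det.update(25.0)                  # sudden jump -> collision
```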
It can be understood that the collision detection module in the terminal device determines whether the terminal device has collided using the IMU readings and/or audio analysis. When a collision occurs, the microphone array in the terminal device may capture an abnormal impulse, which can seriously affect the algorithm processing and may even produce a loud impact noise in the user's earphones. Therefore, when a collision is detected, the subsequent active noise reduction processing can be temporarily disabled: in the embodiment of the present invention, the terminal device performs the noise reduction processing, and if the terminal device itself becomes a noise source, that processing performs poorly. For example, when IMU readings are used for collision detection, a collision can be detected from a sudden change in the output acceleration value. When collision detection is performed by audio analysis, it may use a method based on short-time energy analysis and spectral feature analysis, or a DNN-based method; specifically, for the DNN-based method, a neural network classifier is trained on a large amount of recorded collision data, or on a large amount of ordinary noise data. Optionally, the IMU readings and the audio-analysis detection results may be smoothed jointly to improve the robustness of the detection.
Optionally, the terminal device calculates the first sound transfer function between the user's ear and the terminal device in implementations including but not limited to the following:
(1) According to the first coordinate and the noise source position, interpolating among a plurality of transfer functions measured in advance under matching conditions to obtain the first sound transfer function between the user's ear and the terminal device; or
(2) According to the first coordinate and the noise source position, obtaining the first sound transfer function between the user's ear and the terminal device by a method based on a deep neural network; or
(3) According to the first coordinate, the noise source position, and the current tilt angle of the terminal device, interpolating among a plurality of transfer functions measured in advance under matching conditions to obtain the first sound transfer function between the user's ear and the terminal device; or
(4) According to the first coordinate, the noise source position, and the current tilt angle of the terminal device, obtaining the first sound transfer function between the user's ear and the terminal device by a method based on a deep neural network; or
(5) According to the third coordinate and the noise source position, interpolating among a plurality of transfer functions measured in advance under matching conditions to obtain the first sound transfer function between the user's ear and the terminal device; or
(6) According to the third coordinate and the noise source position, obtaining the first sound transfer function between the user's ear and the terminal device by a method based on a deep neural network; or
(7) According to the third coordinate, the noise source position, and the current tilt angle of the terminal device, interpolating among a plurality of transfer functions measured in advance under matching conditions to obtain the first sound transfer function between the user's ear and the terminal device; or
(8) According to the third coordinate, the noise source position, and the current tilt angle of the terminal device, obtaining the first sound transfer function between the user's ear and the terminal device by a method based on a deep neural network.
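The interpolation variants ((1), (3), (5), (7)) can be sketched generically as interpolation over a grid of pre-measured conditions. This is only an illustration: inverse-distance weighting is an assumed scheme (the patent does not specify one), and all names are hypothetical.

```python
import numpy as np

def interpolate_tf(query, grid_points, grid_tfs):
    """Interpolate pre-measured transfer functions (complex frequency
    responses in `grid_tfs`) measured at conditions `grid_points`
    (e.g. ear coordinate + noise DOA), evaluated at `query`.
    Uses inverse-distance weighting (an assumed choice)."""
    q = np.asarray(query, dtype=float)
    pts = np.asarray(grid_points, dtype=float)
    tfs = np.asarray(grid_tfs, dtype=complex)
    d = np.linalg.norm(pts - q, axis=1)
    if d.min() < 1e-12:
        return tfs[d.argmin()]               # exact grid hit
    w = 1.0 / d
    w /= w.sum()
    return (w[:, None] * tfs).sum(axis=0)    # weighted mix of responses
```

A DNN-based generator (variants (2), (4), (6), (8)) would replace this lookup with a network trained on the same measured (condition, transfer function) pairs.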
It can be understood that the estimate of the first sound transfer function may be obtained by interpolating a set of sound transfer functions measured in advance under similar conditions, according to the third coordinate, the noise source position, and the current tilt angle of the terminal device. Alternatively, the first sound transfer function can be generated by a DNN-based method, i.e., a neural-network transfer-function generator trained on a large number of previously measured sound transfer functions together with their corresponding third coordinates, noise source positions, and tilt angles.
Alternatively, the estimation of the first sound transfer function may be approximated as estimating the phase and amplitude differences from the microphone array to the user's ear, so that the filtering process simplifies to a time shift and a scaling. To estimate the phase and amplitude differences, only the path-length difference between the user's ear and the microphone array along the noise direction is needed. In computationally constrained systems, this approximation can increase the computation speed of the binaural filtering.
Optionally, the facial image of the user includes the user's ear contour features, and the method may further include: the terminal device corrects the first sound transfer function according to the user's ear contour features to obtain a corrected first sound transfer function.
It can be understood that the 3D face modeling module in the terminal device can also detect the shape characteristics of the user's pinna and correct the first sound transfer function to some extent based on these characteristics.
305. The terminal device obtains a second sound transfer function from the currently used earphone housing to the user's ear.
Optionally, the obtaining, by the terminal device, of the second transfer function from the currently used earphone housing to the user's ear may include: the terminal device obtains the model of the currently used earphone, and looks up the second transfer function from the currently used earphone housing to the user's ear in an earphone model database according to that model, where the database stores the correspondence between earphone models and their transfer functions.
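The database lookup described here is essentially a keyed table with a fallback for unknown models. A minimal sketch; the model names, the per-band-gain representation of the transfer function, and the fallback policy are all illustrative assumptions.

```python
# Hypothetical earphone-model database: model name -> second sound
# transfer function, represented here as per-band gains (illustrative).
HEADSET_TF_DB = {
    "ModelA": [0.9, 0.7, 0.5],
    "ModelB": [0.8, 0.6, 0.3],
}
DEFAULT_MODEL = "ModelA"  # assumed fallback, e.g. the closest-looking model

def second_transfer_function(model):
    """Look up the housing-to-ear transfer function for an earphone
    model, falling back to a default when the model is unknown."""
    return HEADSET_TF_DB.get(model, HEADSET_TF_DB[DEFAULT_MODEL])
```

In the embodiment, the fallback is chosen by the earphone model identification module (closest appearance match) rather than a fixed default.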
Optionally, the obtaining, by the terminal device, of the model of the currently used earphone may include, but is not limited to, the following implementations:
(1) The terminal device displays a preset earphone model list and obtains the model of the currently used earphone in response to the user's selection from that list, as shown in fig. 4C; or
(2) The terminal device captures an image of the currently used earphone and matches it against preset earphone images to obtain the model of the currently used earphone, as shown in fig. 4D; or
(3) The terminal device obtains the model of the currently used earphone in response to the user's input of that model, as shown in fig. 4E; or
(4) When the terminal device detects that the current user has connected an earphone to it, the terminal device captures a frontal image of the current user and calls the system's face recognition module to obtain the current user's identity; the terminal device then reads that user's earphone model list from the storage area according to the identity and prompts the user to select the model of the currently used earphone.
It can be understood that the main function of the earphone model identification module in the terminal device is as follows: when the user does not know the model of their earphone, an image of the currently used earphone is captured and matched against an earphone image database on the Internet to find the known earphone model whose appearance is closest. The transfer function of that closest model can then be used as an approximation of the characteristics of the user's earphone in the active noise reduction algorithm. Optionally, earphone model identification may be implemented by training a deep neural network (DNN) on a large number of pictures of earphones of known models.
In the embodiment of the invention, the earphone transfer function is obtained from the earphone model database; when the user does not know their earphone model, the type of earphone can be roughly estimated using the camera and visual analysis on the terminal device to improve the noise reduction effect, so that the noise reduction adapts closely to the earphone characteristics.
It should be noted that the order of step 305 relative to steps 302 to 304 is not limited.
306. The terminal device performs first filtering and down-mixing on the sound signal according to the first sound transfer function to obtain an out-of-ear noise signal.
It can be understood that the binaural filtering and down-mixing module in the terminal device performs binaural filtering and down-mixing on the multi-channel sound signals collected by the microphone array, so as to simulate the out-of-ear noise signals that would be received at the positions of the user's two ears.
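For one ear, the filtering-and-downmix step can be sketched as filtering each microphone channel with its microphone-to-ear impulse response and combining the results. A minimal sketch; representing the first sound transfer function as time-domain impulse responses and averaging as the downmix are assumptions.

```python
import numpy as np

def binaural_filter_downmix(mic_signals, mic_to_ear_irs):
    """Sketch of step 306 for a single ear: convolve each microphone
    channel with its mic-to-ear impulse response and downmix (average)
    into one out-of-ear noise estimate."""
    acc = None
    for sig, ir in zip(mic_signals, mic_to_ear_irs):
        y = np.convolve(sig, ir)          # per-channel binaural filtering
        acc = y if acc is None else acc + y
    return acc / len(mic_signals)         # downmix to a single channel
```

Running this once per ear (with each ear's own impulse responses) yields the two out-of-ear noise signals used in the next step.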
307. The terminal device performs second filtering on the out-of-ear noise signal according to the second sound transfer function to obtain an in-ear noise signal.
It can be understood that the earphone characteristic filtering module in the terminal device filters the out-of-ear noise signal with the second sound transfer function, from housing to ear canal, corresponding to the earphone currently used, to obtain an estimated in-ear noise signal. The second sound transfer function is looked up in the earphone model database according to the earphone model. Such a database can be built by measuring the transfer functions of common earphone models in advance. When the user's earphone is not in the database, the earphone model identification module can match the earphone model whose appearance is closest to the user's earphone and use the corresponding sound transfer function.
308. The terminal device performs noise reduction on the output signal of the currently used earphone according to the in-ear noise signal.
Optionally, this noise reduction may include: the terminal device phase-inverts the in-ear noise signal to obtain a phase-inverted in-ear noise signal, and superimposes the phase-inverted in-ear noise signal on the output signal to the currently used earphone.
It can be understood that the mixing module in the terminal device simply inverts the estimate of the in-ear noise and superimposes it on the original earphone output signal, thereby suppressing the in-ear noise signal.
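The inversion-and-superposition step amounts to subtracting the in-ear noise estimate from the earphone output. A minimal sketch; clipping to a DAC range is an added practical assumption, not part of the embodiment.

```python
import numpy as np

def mix_anti_noise(headphone_out, in_ear_noise_est, limit=1.0):
    """Sketch of step 308: phase inversion + superposition, i.e. subtract
    the estimated in-ear noise from the earphone output so it cancels
    the real noise at the ear."""
    mixed = headphone_out - in_ear_noise_est
    return np.clip(mixed, -limit, limit)   # assumed headroom protection
```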
Optionally, the noise source is a moving noise source. Using the microphone array and the stronger processing capability of the terminal device, combined with the IMU, a moving noise source can be accurately localized and tracked, achieving a better noise reduction effect.
Optionally, the frequency of the processed sound signal is below a second preset threshold. It can be understood that, limited by the accuracy of noise source localization and of the device-to-ear distance estimation, higher-frequency sound signals may carry a relatively large phase error, which degrades the noise reduction and may even amplify the energy of high-frequency noise. Therefore, the embodiment of the present invention can focus on processing low-frequency noise: for example, with a cutoff frequency of 200 Hz, only the portion of the in-ear noise signal below 200 Hz is processed, and the portion above 200 Hz may be left unprocessed. The 200 Hz figure is only illustrative and does not limit the scope of the present invention. Optionally, the specific cutoff frequency may be determined jointly by the microphone array type of the terminal device, the accuracy of the noise source localization algorithm, the accuracy of the visual localization algorithm, and the like.
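Restricting the anti-noise to low frequencies can be sketched with an ideal FFT-domain low-pass. A minimal sketch; a real system would use a causal IIR/FIR filter rather than block FFT filtering, and the 200 Hz cutoff is the illustrative value from the text.

```python
import numpy as np

def lowpass_fft(x, fs, cutoff_hz=200.0):
    """Keep only components below cutoff_hz, so that only the
    low-frequency part of the in-ear noise estimate is cancelled."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    X[freqs > cutoff_hz] = 0.0            # zero everything above the cutoff
    return np.fft.irfft(X, n=x.size)
```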
Optionally, the usage scenarios of the embodiment of the present invention may be scenarios dominated by low-frequency noise, scenarios with moving noise sources, and the like, for example: public transportation, roadways, or indoor environments with air conditioning or fans.
In the embodiment of the invention, the traditional active noise reduction process is moved from the earphone to the terminal device, and the noise reduction effect is optimized by combining the multi-modal information from the microphone array, camera, IMU, and the like on the terminal device with audio and visual algorithms. The user can thus reduce noise with ordinary earphones at no extra cost while watching videos (e.g., movies, TV series, variety shows) or playing games, and experience a better sound effect. In the first play mode, the camera and visual ranging means on the terminal device can be used to obtain a more accurate sound transfer function estimate, and hence a better noise reduction effect.
The sound transfer functions comprise the first sound transfer function between the user's ear and the terminal device and the second sound transfer function from the currently used earphone housing to the ear. The terminal device performs first filtering and down-mixing on the sound signal with the first sound transfer function to obtain the out-of-ear noise signal, then performs second filtering on the out-of-ear noise signal with the second sound transfer function to obtain the in-ear noise signal. The in-ear noise signal thus obtained is closer to the noise signal actually received at the user's ear; phase-inverting it and superimposing it on the output signal of the currently used earphone therefore yields a better noise reduction effect.
As shown in fig. 5, which is a schematic diagram of another embodiment of the noise reduction method in the embodiment of the present invention, the method may include:
501. The terminal device collects a sound signal.
It should be noted that step 501 in the embodiment of the present invention may refer to step 301 in the embodiment shown in fig. 3, and is not described herein again.
502. When the terminal device detects that it is in a second play mode, it receives the user's usage posture and the placement position of the terminal device as input by the user, where the second play mode is a play mode in which the user's face is not oriented toward the terminal device.
Optionally, if the terminal device automatically detects that it is currently in the second play mode, the terminal device may output second prompt information prompting the user to input the usage posture and the placement position of the terminal device; the terminal device then obtains the user's usage posture and the placement position in response to the user's input operation.
Illustratively, the second play mode is a scenario in which the user does not face the screen of the terminal device and uses it to listen to songs, broadcasts, and the like, for example while listening to music. The user may be in a standing, sitting, lying, or other usage posture; the user can select the approximate position of the terminal device on its interactive interface, and the terminal device may be placed where it is relatively fixed with respect to the user's ear, such as on a desktop, in a shirt pocket, a trouser pocket, a bag, or a backpack. Fig. 4G is a schematic diagram of the terminal device in the second play mode in the embodiment of the present invention.
503. The terminal device determines the first coordinate of the feature point in the user's face relative to the terminal device according to the user's usage posture and the placement position of the terminal device.
In the second play mode, the terminal device calculates the first coordinate of the feature point in the user's face relative to the terminal device according to the usage posture selected by the user and the approximate placement position of the terminal device.
Optionally, the terminal device may obtain the first distance between the feature point in the user's face and the terminal device from the first coordinate of that feature point relative to the terminal device.
504. The terminal device obtains the noise source position according to the sound signal.
505. The terminal device calculates the first sound transfer function between the user's ear and the terminal device according to the first coordinate and the noise source position.
506. The terminal device obtains the second sound transfer function between the currently used earphone housing and the user's ear.
507. The terminal device performs first filtering and down-mixing on the sound signal according to the first sound transfer function to obtain the out-of-ear noise signal.
508. The terminal device performs second filtering on the out-of-ear noise signal according to the second sound transfer function to obtain the in-ear noise signal.
509. The terminal device performs noise reduction on the output signal of the currently used earphone according to the in-ear noise signal.
It should be noted that steps 504 to 509 in the embodiment of the present invention may refer to steps 303 to 308 in the embodiment shown in fig. 3, and are not described again here.
In the embodiment of the invention, the traditional active noise reduction process is moved from the earphone to the terminal device, so that the user can experience a better sound effect when listening to music and the like with ordinary earphones, at no extra cost. In the second play mode, the first coordinate of the feature point in the user's face relative to the terminal device is calculated from the usage posture selected by the user and the approximate placement position of the terminal device; the first sound transfer function between the user's ear and the terminal device is then obtained by combining the noise source position and the tilt angle of the terminal device, and is used together with the second sound transfer function from the currently used earphone housing to the ear. The terminal device performs first filtering and down-mixing on the sound signal with the first sound transfer function to obtain the out-of-ear noise signal, then performs second filtering on the out-of-ear noise signal with the second sound transfer function to obtain the in-ear noise signal. The in-ear noise signal thus obtained is closer to the noise signal actually received at the user's ear; phase-inverting it and superimposing it on the output signal of the currently used earphone therefore yields a better noise reduction effect.
In the second play mode, the terminal device calculates the first coordinate of the feature point in the user's face relative to the terminal device from the usage posture selected by the user and the approximate placement position of the terminal device; since this relies on a rough estimate, the noise reduction effect may be inferior to that in the first play mode.
It should be noted that performing noise reduction on the terminal device in cooperation with ordinary earphones, and observing a certain noise reduction effect when ordinary earphones are connected to the terminal device for testing, can serve as one way of demonstrating use of the invention. For the combination of the IMU with noise source localization, the IMU can be disabled and one can test whether the noise reduction for the noise source deteriorates, or whether it deteriorates when the way of holding the phone changes (e.g., a horizontal versus vertical grip); this can likewise serve as a means of demonstration. The use of the camera for 3D face modeling can also be demonstrated through the product's actual registration and usage flow. The real-time ranging part can be demonstrated by disabling the camera, moving the terminal device back and forth during use, and testing whether the noise reduction deteriorates. Obtaining the earphone transfer function from the earphone model database can likewise be demonstrated through the product's actual registration and usage flow.
As shown in fig. 6A, which is a schematic diagram of another embodiment of the terminal device in the embodiment of the present invention, the terminal device may include:
the acquisition module 601 is used for acquiring a sound signal;
a processing module 602, configured to obtain a first sound transfer function between an ear of a user and the terminal device; processing the sound signal according to the first sound transfer function to obtain an ear noise signal; and according to the ear noise signal, carrying out noise reduction processing on an output signal of the currently used earphone.
It should be noted that the terminal device shown in fig. 6A may correspondingly execute the foregoing method embodiment, and details are not described here again.
As shown in fig. 6B, which is a schematic diagram of another embodiment of the terminal device in the embodiment of the present invention, the terminal device may include:
fig. 6B is a block diagram illustrating a partial structure of a mobile phone related to a terminal device according to an embodiment of the present invention. Referring to fig. 6B, the handset includes: radio Frequency (RF) circuit 610, memory 620, input unit 630, display unit 640, sensor 650, audio circuit 660, wireless fidelity (WiFi) module 670, processor 680, and power supply 690. Those skilled in the art will appreciate that the handset configuration shown in fig. 6B is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following specifically describes each component of the mobile phone with reference to fig. 6B:
The RF circuit 610 may be used for receiving and transmitting signals during information transmission and reception or during a call; in particular, it receives downlink information from a base station and delivers it to the processor 680 for processing, and transmits uplink data to the base station. In general, the RF circuit 610 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 610 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The memory 620 may be used to store software programs and modules, and the processor 680 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 620. The memory 620 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the mobile phone. Further, the memory 620 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The input unit 630 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 630 may include a touch panel 631 and other input devices 632. The touch panel 631, also referred to as a touch screen, may collect touch operations by the user on or near it (for example, operations performed on or near the touch panel 631 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 631 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 680, and can also receive and execute commands sent by the processor 680. In addition, the touch panel 631 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 631, the input unit 630 may include other input devices 632, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 640 may be used to display information input by the user or information provided to the user and various menus of the mobile phone. The Display unit 640 may include a Display panel 641, and optionally, the Display panel 641 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 631 may cover the display panel 641, and when the touch panel 631 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 680 to determine the type of the touch event, and then the processor 680 provides a corresponding visual output on the display panel 641 according to the type of the touch event. Although in fig. 6B, the touch panel 631 and the display panel 641 are implemented as two separate components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 631 and the display panel 641 may be integrated to implement the input and output functions of the mobile phone.
The handset may also include at least one sensor 650, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel 641 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 641 and/or the backlight when the mobile phone is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the gesture of the mobile phone (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The audio circuit 660, speaker 661, and microphone 662 can provide an audio interface between the user and the mobile phone. The audio circuit 660 may transmit the electrical signal converted from received audio data to the speaker 661, which converts it into a sound signal for output; on the other hand, the microphone 662 converts collected sound signals into electrical signals, which are received by the audio circuit 660 and converted into audio data; the audio data is then processed by the processor 680 and, for example, transmitted via the RF circuit 610 to another mobile phone, or output to the memory 620 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 670, the mobile phone can help the user receive and send e-mails, browse webpages, access streaming media, and the like, providing wireless broadband Internet access for the user. Although fig. 6B shows the WiFi module 670, it is understood that it is not an essential part of the mobile phone and may be omitted as needed within a scope that does not change the essence of the invention.
The processor 680 is a control center of the mobile phone, and connects various parts of the entire mobile phone by using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 620 and calling data stored in the memory 620, thereby integrally monitoring the mobile phone. Optionally, processor 680 may include one or more processing units; preferably, the processor 680 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 680.
The handset also includes a power supply 690 (e.g., a battery) for powering the various components, which may preferably be logically connected to the processor 680 via a power management system, such that the power management system may be used to manage charging, discharging, and power consumption.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which will not be described herein.
In the present embodiment, the microphone 662 is used to collect a sound signal;
a processor 680, configured to obtain a first sound transfer function between the ear of the user and the terminal device; processing the sound signal according to the first sound transfer function to obtain an ear noise signal; and performing noise reduction processing on the output signal of the currently used earphone according to the ear noise signal.
Optionally, the processor 680 is further configured to obtain a second sound transfer function between the currently used earphone housing and the ear of the user.
Optionally, the processor 680 is specifically configured to process the sound signal according to the first sound transfer function and the second sound transfer function, so as to obtain the ear noise signal.
Optionally, the processor 680 is specifically configured to perform first filtering and downmix processing on the sound signal according to the first sound transfer function to obtain an external-ear noise signal; performing second filtering processing on the out-of-ear noise signal according to the second sound transfer function to obtain an in-ear noise signal; and performing noise reduction processing on the output signal of the currently used earphone according to the in-ear noise signal.
Optionally, the processor 680 is specifically configured to, when it is detected that the current terminal device is in a first play mode, obtain a first coordinate of a feature point in the user face relative to the terminal device, where the first play mode is a play mode in which the user face faces the terminal device; obtaining the position of a noise source according to the sound signal; and calculating to obtain a first sound transfer function between the ear of the user and the terminal equipment according to the first coordinate and the position of the noise source.
Optionally, the processor 680 is specifically configured to obtain a third coordinate of the user ear relative to the terminal device according to the first coordinate and a preset second coordinate of the user ear and a feature point in the user face; and calculating to obtain a first sound transfer function between the ear of the user and the terminal equipment according to the third coordinate and the position of the noise source.
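The third coordinate above is simply the first coordinate of the facial feature point plus the preset offset of the ear from that feature point (the "second coordinate"). A minimal sketch, with hypothetical names and example values:

```python
import numpy as np

def ear_position(first_coord, ear_offset):
    """Third coordinate: position of the user's ear relative to the
    terminal device, obtained by adding the preset ear-to-feature-point
    offset (the 'second coordinate') to the feature point's first
    coordinate. Both inputs are 3-D vectors in the device frame."""
    return np.asarray(first_coord, dtype=float) + np.asarray(ear_offset, dtype=float)
```

For example, a feature point 0.1 m to the right of the device with a preset ear offset of 0.05 m yields an ear position 0.15 m to the right.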
Optionally, the processor 680 is specifically configured to receive a user using gesture and a location where the terminal device is placed, which are input by a user, when it is detected that the current terminal device is in a second play mode, where the second play mode is a play mode in which a face of the user does not face the terminal device; determining a first coordinate of a feature point in the user face relative to the terminal equipment according to the user using gesture and the position where the terminal equipment is placed; obtaining the position of a noise source according to the sound signal; and calculating to obtain a first sound transfer function between the ear of the user and the terminal equipment according to the first coordinate and the position of the noise source.
Optionally, the processor 680 is further configured to obtain a current tilt angle of the terminal device through an inertial measurement unit; and calculate a first sound transfer function between the ear of the user and the terminal device according to the first coordinate, the position of the noise source, and the current tilt angle of the terminal device.
Optionally, the processor 680 is specifically configured to acquire an image of a face of the user through a camera for visual analysis, or acquire a first coordinate of a feature point in the face of the user relative to the terminal device through an ultrasonic ranging method.
Optionally, the processor 680 is further configured to obtain a fourth coordinate of the ear of the user relative to the feature point in the face of the user according to the face image of the user and a face 3D model preset by the user; obtaining a fifth coordinate of the ear of the user relative to the terminal equipment according to the first coordinate and the fourth coordinate; and calculating to obtain a first sound transfer function between the ear of the user and the terminal equipment according to the fifth coordinate and the position of the noise source.
Optionally, the processor 680 is further configured to modify the first sound transfer function according to the ear contour feature of the user, so as to obtain a modified first sound transfer function.
Optionally, the processor 680 is further configured to perform interpolation calculation on a plurality of transfer functions measured in advance under a matching condition according to the first coordinate, the noise source location, and the current tilt angle of the terminal device, to obtain a first sound transfer function between the ear of the user and the terminal device; or, according to the first coordinate, the position of the noise source and the current inclination angle of the terminal device, obtaining a first sound transfer function between the ear of the user and the terminal device by a method based on a deep neural network.
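The interpolation approach above can be sketched as follows. This is an illustrative simplification, assuming the pre-measured transfer functions are indexed by a single angle and interpolated linearly; a real implementation would interpolate jointly over the first coordinate, noise source position, and tilt angle, and all names here are hypothetical.

```python
import numpy as np

def interpolate_transfer_function(angle, measured):
    """Linearly interpolate a transfer function from ones measured in
    advance under matching conditions.

    measured: dict mapping measurement angle (degrees) to an FIR
              impulse response (np.ndarray).
    """
    angles = sorted(measured)
    # Nearest measured angles below and above, clamped at the extremes
    lo = max((a for a in angles if a <= angle), default=angles[0])
    hi = min((a for a in angles if a >= angle), default=angles[-1])
    if lo == hi:
        return measured[lo]
    w = (angle - lo) / (hi - lo)                 # linear blend weight
    return (1.0 - w) * measured[lo] + w * measured[hi]
```

The deep-neural-network alternative mentioned in the text would replace this table lookup with a learned mapping from (coordinate, noise source position, tilt angle) to filter coefficients.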
Optionally, the microphone 662 is specifically configured to collect a plurality of sound signals by the terminal device through a plurality of microphones;
a processor 680, specifically configured to calculate a time difference of arrival for the multiple sound signals by using a phase-weighted cross-correlation method; and obtaining the position of the noise source according to the arrival time difference.
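The phase-weighted cross-correlation mentioned here is commonly known as GCC-PHAT. A minimal two-microphone sketch, assuming equal sampling rates and hypothetical names:

```python
import numpy as np

def gcc_phat(sig, ref, fs):
    """Estimate the time difference of arrival (TDOA) between two
    microphone channels with the phase transform (PHAT) weighted
    cross-correlation. A positive result means `sig` lags `ref`."""
    n = len(sig) + len(ref)                      # zero-pad to avoid circular wrap
    X = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    X /= np.abs(X) + 1e-12                       # PHAT weighting: keep phase only
    cc = np.fft.irfft(X, n=n)
    max_shift = n // 2
    # Re-center the correlation so index max_shift corresponds to zero lag
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = int(np.argmax(np.abs(cc))) - max_shift
    return shift / fs
```

Given TDOAs for several microphone pairs and the known microphone geometry, the noise source position can then be triangulated, as the text describes.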
Optionally, the processor 680 is specifically configured to obtain a model of a currently used headset; and searching and obtaining a second transfer function from the currently used earphone shell to the ear of the user in an earphone model database according to the model of the currently used earphone, wherein the relation between earphones of different models and the corresponding transfer function is stored in the earphone model database.
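The database lookup above is essentially a mapping from earphone model to its stored housing-to-ear transfer function. A minimal sketch; the model names, filter values, and fallback behavior are invented for illustration:

```python
# Hypothetical earphone model database: model -> second transfer function
# (here a short FIR impulse response from housing to ear; values invented).
HEADSET_DB = {
    "model_a": [0.9, 0.1],
    "model_b": [0.8, 0.15, 0.05],
}

def lookup_second_transfer_function(model):
    """Return the stored second transfer function for an earphone model,
    falling back to a flat (identity) response for unknown models."""
    return HEADSET_DB.get(model, [1.0])
```

In practice the key could come from any of the acquisition paths the text lists: the user's selection from a displayed model list, image matching against preset earphone images, or direct user input.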
Optionally, the display unit 640 is configured to display a preset earphone model list; the processor 680 is configured to respond to the user's selection from the preset earphone model list and obtain the model of the currently used earphone; or,
the camera is used for collecting an image of the currently used earphone; the processor 680 is specifically configured to match the image of the currently used earphone with preset earphone images to obtain the model of the currently used earphone; or,
the processor 680 is specifically configured to respond to an input operation of the user on the model of the currently used headset, and obtain the model of the currently used headset.
Optionally, the processor 680 is further configured to obtain a first acceleration; and determine whether a collision occurs according to at least one of the sound signal, the current tilt angle, and the first acceleration.
Optionally, the processor 680 is further configured to determine whether the terminal device has collided according to the sound signal, by a method based on short-time energy analysis and spectral feature analysis, or by a method based on a deep neural network; and/or,
the processor 680 is further configured to determine whether the terminal device has collided according to the first acceleration; and/or,
the processor 680 is further configured to determine a second acceleration according to the current tilt angle, and determine whether the terminal device has collided according to the second acceleration.
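The collision check can combine the three cues above. A heuristic sketch, assuming fixed thresholds in place of the trained classifiers the text also allows; all threshold values and names are illustrative placeholders:

```python
import numpy as np

def detect_collision(sound_frame, accel, tilt_prev, tilt_now, dt,
                     energy_thresh=0.5, accel_thresh=15.0, rate_thresh=90.0):
    """Heuristic collision check combining the three cues in the text:
    short-time energy of the sound signal, the measured ('first')
    acceleration in m/s^2, and a 'second acceleration' proxy derived
    from the change in tilt angle (degrees) over dt seconds."""
    frame = np.asarray(sound_frame, dtype=float)
    short_time_energy = float(np.mean(frame ** 2))   # short-time energy cue
    tilt_rate = abs(tilt_now - tilt_prev) / dt       # tilt-change cue, deg/s
    return (short_time_energy > energy_thresh
            or abs(accel) > accel_thresh
            or tilt_rate > rate_thresh)
```

The spectral-feature or deep-neural-network variants mentioned in the text would replace the energy comparison with a classifier over spectral features of the sound frame.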
Optionally, the frequency of the sound signal is smaller than a second preset threshold.
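Restricting processing to frequencies below the threshold can be illustrated with a simple FFT-based low-pass filter; the cutoff value stands in for the "second preset threshold" and the names are hypothetical:

```python
import numpy as np

def lowpass(signal, fs, cutoff):
    """Keep only components of the sound signal whose frequency is
    below the threshold, by zeroing the higher FFT bins."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[freqs >= cutoff] = 0.0                  # drop bins at/above the threshold
    return np.fft.irfft(spectrum, n=len(signal))
```

Limiting active noise reduction to low frequencies is common practice, since accurate phase inversion becomes infeasible at short wavelengths.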
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the invention are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that a computer can access, or a data storage device, such as a server or a data center, integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention essentially, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (19)

1. A method of noise reduction, comprising:
the terminal equipment collects sound signals;
the terminal equipment acquires a first sound transfer function between the ear of a user and the terminal equipment;
the terminal equipment processes the sound signal according to the first sound transfer function to obtain an ear noise signal; and
the terminal equipment carries out noise reduction processing on an output signal of the currently used earphone according to the ear noise signal;
the terminal equipment acquires the ear of a user and a first sound transfer function between the terminal equipment, and the method comprises the following steps:
the method comprises the steps that under the condition that the terminal equipment detects that the terminal equipment is in a first playing mode at present, a first coordinate of a feature point in a user face relative to the terminal equipment is obtained, and the first playing mode is a playing mode that the user face faces the terminal equipment; the terminal equipment obtains the position of a noise source according to the sound signal; the terminal equipment calculates to obtain a first sound transfer function between the ear of the user and the terminal equipment according to the first coordinate and the position of the noise source;
or,
the method comprises the steps that the terminal equipment receives a user using gesture and a terminal equipment placing position input by a user under the condition that the terminal equipment is detected to be in a second playing mode, wherein the second playing mode is a playing mode that the face of the user does not face the terminal equipment; the terminal equipment determines a first coordinate of a feature point in a user face relative to the terminal equipment according to the user using gesture and the position where the terminal equipment is placed; the terminal equipment obtains the position of a noise source according to the sound signal; and the terminal equipment calculates to obtain a first sound transfer function between the ear of the user and the terminal equipment according to the first coordinate and the position of the noise source.
2. The method of claim 1, further comprising:
the terminal device obtains a second sound transfer function between the currently used earphone shell and the ear of the user.
3. The method of claim 2, wherein the terminal device processes the sound signal according to the first sound transfer function to obtain an ear noise signal, and comprises:
and the terminal equipment processes the sound signal according to the first sound transfer function and the second sound transfer function to obtain an ear noise signal.
4. The method of claim 3, wherein the terminal device processes the sound signal according to the first sound transfer function and the second sound transfer function to obtain an ear noise signal, and comprises:
the terminal equipment performs first filtering and down-mixing processing on the sound signal according to the first sound transfer function to obtain an out-of-ear noise signal;
the terminal equipment performs second filtering processing on the noise signal outside the ear according to the second sound transfer function to obtain a noise signal inside the ear;
the terminal equipment carries out noise reduction processing on the output signal of the currently used earphone according to the ear noise signal, and the noise reduction processing comprises the following steps:
and the terminal equipment carries out noise reduction processing on the output signal of the currently used earphone according to the in-ear noise signal.
5. The method of claim 1, wherein the terminal device calculates a first sound transfer function between the ear of the user and the terminal device according to the first coordinate and the position of the noise source, and comprises:
the terminal equipment obtains a third coordinate of the user's ear relative to the terminal equipment according to the first coordinate and a preset second coordinate of the user's ear relative to the feature point in the user's face;
and the terminal equipment calculates to obtain a first sound transfer function between the ear of the user and the terminal equipment according to the third coordinate and the position of the noise source.
6. The method according to claim 1 or 5, characterized in that the method further comprises:
the terminal equipment obtains a current inclination angle of the terminal equipment through an inertial measurement unit;
the terminal equipment calculates and obtains a first sound transfer function between the ear of the user and the terminal equipment according to the first coordinate and the position of the noise source, and the method comprises the following steps:
and the terminal equipment calculates to obtain a first sound transfer function between the ear of the user and the terminal equipment according to the first coordinate, the position of the noise source and the current inclination angle of the terminal equipment.
7. The method of claim 1, wherein obtaining the first coordinate of the feature point in the user's face relative to the terminal device comprises:
the terminal equipment acquires a face image of a user through a camera for visual analysis, or acquires a first coordinate of a feature point in the face of the user relative to the terminal equipment through an ultrasonic distance measurement method.
8. The method of claim 7, further comprising:
the terminal equipment obtains a fourth coordinate of the ear of the user relative to the feature point in the face of the user according to the face image of the user and the face 3D model preset by the user;
the terminal equipment calculates and obtains a first sound transfer function between the ear of the user and the terminal equipment according to the first coordinate and the position of the noise source, and the method comprises the following steps:
obtaining a fifth coordinate of the user ear relative to the terminal equipment according to the first coordinate and the fourth coordinate;
and the terminal equipment calculates to obtain a first sound transfer function between the ear of the user and the terminal equipment according to the fifth coordinate and the position of the noise source.
9. The method of claim 7 or 8, wherein the user facial image includes user ear contour features, the method further comprising:
and correcting the first sound transfer function according to the ear contour features of the user to obtain a corrected first sound transfer function.
10. The method of claim 6, wherein the terminal device calculates a first sound transfer function between the user's ear and the terminal device according to the first coordinate, the noise source position and the current tilt angle of the terminal device, and comprises:
carrying out interpolation calculation on a plurality of transfer functions measured in advance under a matching condition according to the first coordinate, the position of the noise source and the current inclination angle of the terminal equipment to obtain a first sound transfer function between the ear of the user and the terminal equipment; or the like, or a combination thereof,
and obtaining a first sound transfer function between the ear of the user and the terminal equipment by a method based on a deep neural network according to the first coordinate, the position of the noise source and the current inclination angle of the terminal equipment.
11. The method according to claim 1 or 5, wherein the terminal device collects a sound signal, comprising:
the terminal equipment collects a plurality of sound signals through a plurality of microphones;
the terminal equipment obtains the position of a noise source according to the sound signal, and the method comprises the following steps:
the terminal equipment calculates and obtains arrival time difference by using a phase weighted cross-correlation method for the sound signals;
and the terminal equipment obtains the position of the noise source according to the arrival time difference.
12. The method according to any of claims 2-4, wherein the terminal device obtaining a second transfer function from the currently used earphone housing to the ear of the user comprises:
the terminal equipment acquires the model of the currently used earphone;
and the terminal equipment searches and obtains a second transfer function from the currently used earphone shell to the ear of the user in an earphone model database according to the model of the currently used earphone, and the relation between earphones of different models and the corresponding transfer function is stored in the earphone model database.
13. The method of claim 11, wherein the terminal device obtains a model of a currently used headset, and comprises:
the terminal equipment displays a preset earphone model list; the terminal equipment responds to the user's selection from the preset earphone model list and obtains the model of the currently used earphone; or,
the terminal equipment collects an image of the currently used earphone; the terminal equipment matches the image of the currently used earphone with preset earphone images to obtain the model of the currently used earphone; or,
and the terminal equipment responds to the input operation of the user on the type of the currently used earphone to obtain the type of the currently used earphone.
14. The method of claim 6, further comprising:
the terminal equipment acquires a first acceleration;
and the terminal equipment judges whether collision occurs or not according to at least one of the sound signal, the current inclination angle and the first acceleration.
15. The method of claim 14, wherein the terminal device determining whether a collision occurs according to at least one of the sound signal, the current inclination angle, and the first acceleration comprises:
the terminal device determines whether a collision has occurred according to the sound signal, by a method based on short-time energy analysis and spectral feature analysis or a method based on a deep neural network; and/or
the terminal device determines whether a collision has occurred according to the first acceleration; and/or
the terminal device determines a second acceleration according to the current inclination angle, and determines whether a collision has occurred according to the second acceleration.
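Claim 15 names short-time energy and spectral feature analysis but gives no parameters. The sketch below is one simplified reading: flag a collision when a frame's energy jumps well above the running background and its spectral centroid lies in the high band typical of impact transients. The frame length, thresholds, and choice of spectral centroid as the feature are assumptions, not from the patent.

```python
import numpy as np

def detect_collision(signal, fs, frame_ms=10, energy_ratio=8.0, centroid_hz=2000.0):
    """Flag a collision when one short-time frame's energy exceeds
    energy_ratio times the median frame energy AND that frame's spectral
    centroid is above centroid_hz (impact transients are broadband/high)."""
    frame = max(1, int(fs * frame_ms / 1000))
    n_frames = len(signal) // frame
    frames = np.reshape(signal[:n_frames * frame], (n_frames, frame))
    energy = np.sum(frames ** 2, axis=1)          # short-time energy per frame
    background = np.median(energy) + 1e-12        # robust background level
    for i, e in enumerate(energy):
        if e > energy_ratio * background:
            spec = np.abs(np.fft.rfft(frames[i]))
            f = np.fft.rfftfreq(frame, d=1.0 / fs)
            centroid = np.sum(f * spec) / (np.sum(spec) + 1e-12)
            if centroid > centroid_hz:
                return True
    return False
```

A steady loud tone does not trigger this test (its energy never exceeds its own median by the required ratio), while a brief high-frequency click over a quiet background does.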
16. The method according to any of claims 1-5, wherein the frequency of the sound signal is less than a second preset threshold.
17. A terminal device, comprising:
an acquisition module, configured to acquire a sound signal;
a processing module, configured to acquire a first sound transfer function between the ear of the user and the terminal device, process the sound signal according to the first sound transfer function to obtain an ear noise signal, and perform noise reduction processing on an output signal of the currently used earphone according to the ear noise signal;
wherein the processing module is specifically configured to, when it is detected that the terminal device is currently in a first play mode, acquire a first coordinate of a feature point of the user's face relative to the terminal device, the first play mode being a play mode in which the user's face faces the terminal device; obtain the position of a noise source according to the sound signal; and calculate the first sound transfer function between the ear of the user and the terminal device according to the first coordinate and the position of the noise source;
or, alternatively,
the processing module is specifically configured to, when it is detected that the terminal device is currently in a second play mode, receive a usage posture input by the user and the position where the terminal device is placed, the second play mode being a play mode in which the user's face does not face the terminal device; determine a first coordinate of a feature point of the user's face relative to the terminal device according to the usage posture and the position where the terminal device is placed; obtain the position of a noise source according to the sound signal; and calculate the first sound transfer function between the ear of the user and the terminal device according to the first coordinate and the position of the noise source.
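The claims say the first sound transfer function is calculated from the first coordinate and the noise-source position, but do not fix a propagation model. One simple free-field choice (spherical spreading plus propagation delay) is sketched below; the function names, coordinate convention, and the free-field assumption itself are illustrative, not from the patent.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at room temperature (assumption)

def free_field_transfer(ear_xyz, mic_xyz, source_xyz, freqs):
    """Model the transfer function from the device microphone to the user's
    ear as the ratio of two free-field paths from the noise source:
    H(f) = (d_mic / d_ear) * exp(-j*2*pi*f*(d_ear - d_mic)/c)."""
    ear_xyz, mic_xyz, source_xyz = map(np.asarray, (ear_xyz, mic_xyz, source_xyz))
    d_ear = np.linalg.norm(source_xyz - ear_xyz)   # source-to-ear distance
    d_mic = np.linalg.norm(source_xyz - mic_xyz)   # source-to-microphone distance
    delay = (d_ear - d_mic) / SPEED_OF_SOUND       # extra propagation time to the ear
    return (d_mic / d_ear) * np.exp(-2j * np.pi * np.asarray(freqs) * delay)

def ear_noise_spectrum(mic_spectrum, H):
    """Apply the transfer function to the microphone spectrum to estimate
    the noise spectrum at the ear (the claimed 'ear noise signal')."""
    return np.asarray(mic_spectrum) * H
```

The estimated ear noise spectrum would then drive the anti-noise component mixed into the earphone output.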
18. A terminal device, comprising:
a memory storing executable program code;
a processor coupled with the memory;
wherein the processor invokes the executable program code stored in the memory to perform the method of any one of claims 1-16.
19. A computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform the method of any one of claims 1-16.
CN202010981457.0A 2020-09-17 2020-09-17 Noise reduction method, terminal device and computer readable storage medium Active CN114205701B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010981457.0A CN114205701B (en) 2020-09-17 2020-09-17 Noise reduction method, terminal device and computer readable storage medium
PCT/CN2021/102907 WO2022057365A1 (en) 2020-09-17 2021-06-29 Noise reduction method, terminal device, and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN114205701A CN114205701A (en) 2022-03-18
CN114205701B true CN114205701B (en) 2023-01-03

Family

ID=80644829


Country Status (2)

Country Link
CN (1) CN114205701B (en)
WO (1) WO2022057365A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116828354B (en) * 2023-08-30 2023-11-07 深圳市智纬科技有限公司 Radio quality optimization method and system for collar clamp wireless microphone
CN117072424B (en) * 2023-10-13 2023-12-12 意朗智能科技(南通)有限公司 Debugging method and system for reducing working noise of air compressor

Citations (5)

Publication number Priority date Publication date Assignee Title
CN104780258A (en) * 2015-03-18 2015-07-15 北京佳讯飞鸿电气股份有限公司 Noise removing method based on acceleration sensor, host processor and dispatching terminal
CN105307081A (en) * 2014-07-31 2016-02-03 展讯通信(上海)有限公司 Voice signal processing system and method with active noise reduction
US9838812B1 (en) * 2016-11-03 2017-12-05 Bose Corporation On/off head detection of personal acoustic device using an earpiece microphone
CN108668188A (en) * 2017-03-30 2018-10-16 天津三星通信技术研究有限公司 The method and its electric terminal of the active noise reduction of the earphone executed in electric terminal
CN111665513A (en) * 2019-03-05 2020-09-15 阿尔派株式会社 Facial feature detection device and facial feature detection method

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US8649526B2 (en) * 2010-09-03 2014-02-11 Nxp B.V. Noise reduction circuit and method therefor
US8923522B2 (en) * 2010-09-28 2014-12-30 Bose Corporation Noise level estimator
US9955279B2 (en) * 2016-05-11 2018-04-24 Ossic Corporation Systems and methods of calibrating earphones


Also Published As

Publication number Publication date
WO2022057365A1 (en) 2022-03-24
CN114205701A (en) 2022-03-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant