CN112130664B - Intelligent noise reduction method, intelligent wake-up method and device using same


Info

Publication number
CN112130664B
CN112130664B (application CN202010972698.9A)
Authority
CN
China
Prior art keywords
noise reduction
sound signal
user
intelligent
sound
Prior art date
Legal status
Active
Application number
CN202010972698.9A
Other languages
Chinese (zh)
Other versions
CN112130664A (en)
Inventor
李艳玲
严肃
陈仁益
Current Assignee
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Samsung Electronics China R&D Center, Samsung Electronics Co Ltd filed Critical Samsung Electronics China R&D Center
Priority to CN202010972698.9A
Publication of CN112130664A
Application granted
Publication of CN112130664B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • GPHYSICS
    • G04HOROLOGY
    • G04GELECTRONIC TIME-PIECES
    • G04G13/00Producing acoustic time signals
    • G04G13/02Producing acoustic time signals at preselected times, e.g. alarm clocks
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/011Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T90/00Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • Neurology (AREA)
  • Dermatology (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

An intelligent noise reduction method, an intelligent wake-up method, and a device using the methods are disclosed. A machine learning-based intelligent noise reduction method comprises the following steps: receiving a sound signal; determining whether noise reduction is required for the sound signal by inputting the sound signal into an intelligent noise reduction model and receiving, from the model, information on the user's reaction to the sound signal; and, when noise reduction is required, performing noise reduction on the sound signal, thereby meeting the personalized needs of different users.

Description

Intelligent noise reduction method, intelligent wake-up method and device using same
Technical Field
The present disclosure relates to the field of computer technology. More particularly, the present disclosure relates to an intelligent noise reduction method, an intelligent wake-up method, and an apparatus using the same.
Background
In daily life, external noise can seriously disturb people's sleep, work, study, and entertainment, and thereby reduce quality of life, which creates demand for headphones and related devices with a noise reduction function. In this respect, passive noise reduction sponge earplugs and active noise reduction headphones and earbuds are relatively mature products, applied in scenes such as entertainment, work, study, sleep, business travel, and driving. Moreover, a noise reduction function can be provided by extending existing devices such as headphones and sleep earplugs without adding excessive hardware.
On the other hand, noise reduction headphone products shield all external sounds, so a user may miss important or useful sound information, which brings certain drawbacks. For example, a user sleeping with noise reduction headphones may fail to hear the alarm clock, oversleep, and miss important phone calls; a user who sleeps wearing noise reduction earplugs cannot hear an alarm, which increases safety hazards; a person who wears noise reduction headphones to listen to music while riding or driving shields all surrounding sounds, which may lead to accidents; and parents wearing noise reduction headphones while their child sleeps cannot hear the infant's cry and thus cannot soothe the infant in time.
Therefore, intelligent noise reduction is a development trend for noise reduction headphones. On top of conventional noise reduction, intelligent noise reduction selectively adds intelligent selection functions, such as adjusting noise reduction parameters according to the surrounding environment or deciding whether to reduce noise according to the frequency band of a sound, so as to avoid blindly shielding all sounds.
Meanwhile, an intelligent wake-up function is another important function of intelligent noise reduction headphones and related products. The intelligent wake-up function is embodied, for example, as follows: when a user sleeps wearing noise reduction headphones, an external alarm cannot be heard; if an alarm sound is preset in the headphones, it can be played into the user's ear at the scheduled time to wake the user, while avoiding disturbing roommates and others nearby.
Intelligent noise reduction and intelligent wake-up in the prior art realize noise reduction and wake-up by presetting conditions, for example, reducing noise for specific sounds such as a preset sound database or a preset frequency band, or judging whether to wake the user by taking the user's sleep period, depth of sleep, or biometric features as criteria. However, different users have different physical conditions, different reactions to sound, different sleep requirements, and different scenes, so preset noise reduction and wake-up conditions cannot be adjusted in real time according to the actual conditions and scenes of different users, and true intelligence and personalization are not achieved.
BRIEF SUMMARY OF THE PRESENT DISCLOSURE
One aspect of the present disclosure is to provide an intelligent noise reduction method, an intelligent wake-up method, and an apparatus using the same, so as to meet the personalized needs of different users.
In one general aspect, there is provided a machine learning-based intelligent noise reduction method, comprising: receiving a sound signal; determining whether noise reduction is required for the sound signal by inputting the sound signal into the intelligent noise reduction model and receiving response information of a user to the sound signal from the intelligent noise reduction model; in the case where noise reduction is required for the sound signal, noise reduction is performed for the sound signal.
Optionally, the intelligent noise reduction model is trained based on the sound signal and brain wave response of the user to the sound signal.
Optionally, the sound signal is received through headphones.
Optionally, the user's reaction information to the sound signal includes user preference and sensitivity to the sound signal.
Optionally, training the intelligent noise reduction model includes training the intelligent noise reduction model by taking the sound signal and the brain wave response of the user to the sound signal as input and output samples of the intelligent noise reduction model.
Optionally, denoising the sound signal includes emitting a reverse sound wave to neutralize the sound signal.
In another general aspect, there is provided a machine learning-based intelligent wake-up method, comprising: determining a scene in which a user is located; monitoring brain wave signals of the user; determining whether the user needs to be woken up according to the user state and the determined scene, by inputting the brain wave signals of the user into an intelligent wake-up model and receiving from the model the user state corresponding to the brain wave signals; and waking up the user in the case that the user state matches the determined scene.
Optionally, the intelligent wake-up model is trained according to the brain wave signals of the user and the user states corresponding to the brain wave signals confirmed through user feedback aiming at the scene of the user.
Optionally, the user is awakened by an alarm.
Optionally, the user state corresponding to the brain wave signal of the user includes drowsiness and sufficient sleep.
Optionally, training the intelligent wake-up model includes training the intelligent wake-up model by taking, as input and output samples of the intelligent wake-up model, a user state corresponding to the brain wave signal of the user and the brain wave signal confirmed via user feedback for a scene in which the user is located.
In another general aspect, there is provided a machine learning-based intelligent noise reduction device, comprising: a sound receiving module configured to receive a sound signal; a noise reduction determining module configured to determine whether noise reduction of the sound signal is required by inputting the sound signal to the intelligent noise reduction model and receiving reaction information of a user to the sound signal from the intelligent noise reduction model; the intelligent noise reduction module is configured to reduce noise of the sound signal under the condition that the noise of the sound signal is required to be reduced.
In another general aspect, there is provided a machine learning based intelligent wake-up device, comprising: the scene determining module is configured to determine a scene in which a user is located; the brain wave monitoring module is configured to monitor brain wave signals of a user; the wake-up determining module is configured to determine whether the user needs to be awakened according to the user state and the determined scene of the user by inputting the brain wave signals of the user into the intelligent wake-up model and receiving the user state corresponding to the brain wave signals of the user from the intelligent wake-up model; and the intelligent awakening module is configured to awaken the user under the condition that the user state is matched with the determined scene in which the user is positioned.
In another general aspect, there is provided a computer-readable storage medium storing a computer program which, when executed by a processor, implements the machine learning-based intelligent noise reduction method or the machine learning-based intelligent wake-up method as described above.
In another general aspect, there is provided a computing device comprising: a processor; and a memory storing a computer program which, when executed by the processor, implements the intelligent noise reduction method based on machine learning or the intelligent wake-up method based on machine learning as described above.
According to the intelligent noise reduction method, the intelligent wake-up method, and the devices using the methods, the intelligent noise reduction model is trained and optimized according to different users' brain wave responses to sounds, which well solves the problem that different users have different preferences and sensitivities for the same sound; and the intelligent wake-up model is trained and optimized according to different users' feedback on being woken up in different scenes, which well solves the problem that different users have different wake-up needs in different scenes, so that the personalized needs of different users are met.
Additional aspects and/or advantages of the present general inventive concept will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the general inventive concept.
Drawings
The foregoing and other objects and features of exemplary embodiments of the present disclosure will become more apparent from the following description taken in conjunction with the accompanying drawings which illustrate the embodiments by way of example, in which:
FIG. 1 is a flowchart illustrating a machine learning based intelligent noise reduction method according to an exemplary embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating a machine learning based smart wakeup method according to an example embodiment of the present disclosure;
FIG. 3 is a block diagram illustrating a machine learning based intelligent noise reduction device according to an exemplary embodiment of the present disclosure;
FIG. 4 is a block diagram illustrating a machine learning based intelligent wake-up device in accordance with an exemplary embodiment of the present disclosure;
FIG. 5 is a flowchart illustrating intelligent noise reduction in different scenarios according to an exemplary embodiment of the present disclosure;
FIG. 6 is a flow diagram illustrating intelligent wake-up in different scenarios in accordance with an exemplary embodiment of the present disclosure;
fig. 7 is a block diagram illustrating a computing device according to an exemplary embodiment of the present disclosure.
Detailed Description
The following detailed description is provided to assist the reader in obtaining a thorough understanding of the methods, apparatus, and/or systems described herein. However, various changes, modifications, and equivalents of the methods, apparatuses, and/or systems described herein will be apparent after an understanding of the disclosure of the present application. For example, the order of operations described herein is merely an example and is not limited to those set forth herein, but may be altered as will be apparent after an understanding of the disclosure of the present application, except for operations that must occur in a particular order. Furthermore, descriptions of features known in the art may be omitted for clarity and conciseness.
The features described herein may be embodied in different forms and should not be construed as limited to the examples described herein. Rather, the examples described herein have been provided to illustrate only some of the many possible ways to implement the methods, devices, and/or systems described herein, which will be apparent after an understanding of the present disclosure.
As used herein, the term "and/or" includes any one of the listed items associated as well as any combination of any two or more.
Although terms such as "first," "second," and "third" may be used herein to describe various elements, components, regions, layers or sections, these elements, components, regions, layers or sections should not be limited by these terms. Rather, these terms are only used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first member, first component, first region, first layer, or first portion referred to in the examples described herein may also be referred to as a second member, second component, second region, second layer, or second portion without departing from the teachings of the examples.
In the description, when an element (such as a layer, region, or substrate) is described as being "on," "connected to," or "coupled to" another element, it can be directly "on," "connected to," or "coupled to" the other element, or one or more intervening elements may be present. In contrast, when an element is described as being "directly on," "directly connected to," or "directly coupled to" another element, no intervening elements are present.
The terminology used herein is for the purpose of describing various examples only and is not intended to be limiting of the disclosure. Singular forms also are intended to include plural forms unless the context clearly indicates otherwise. The terms "comprises," "comprising," and "having" specify the presence of stated features, amounts, operations, components, elements, and/or combinations thereof, but do not preclude the presence or addition of one or more other features, amounts, operations, components, elements, and/or combinations thereof.
Unless defined otherwise, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs after understanding this disclosure. Unless explicitly so defined herein, terms (such as those defined in a general dictionary) should be construed to have meanings consistent with their meanings in the context of the relevant art and the present disclosure, and should not be interpreted idealized or overly formal.
In addition, in the description of the examples, when it is considered that detailed descriptions of well-known related structures or functions would render the explanation of the present disclosure ambiguous, such detailed descriptions will be omitted.
Hereinafter, embodiments will be described in detail with reference to the accompanying drawings. However, the embodiments may be implemented in various forms and are not limited to the examples described herein.
Fig. 1 is a flowchart illustrating a machine learning-based intelligent noise reduction method according to an exemplary embodiment of the present disclosure. For example, the method may be performed in a device providing noise reduction functionality, such as a headset, and may be performed by hardware modules (such as a processor) and/or software modules (such as an application program run by the processor) in the headset. In the following description, an earphone is described as an example of a device that provides a noise reduction function, but it should be understood by those skilled in the art that embodiments of the present disclosure are not limited thereto.
Referring to fig. 1, in step S101, a sound signal is received. For example, sound may be received in the form of an analog signal through a sound collector on the headset worn by the user and then converted into a digital signal for subsequent use.
In step S102, it is determined whether noise reduction of the sound signal is required by inputting the sound signal into the intelligent noise reduction model and receiving, from the model, the user's reaction information to the sound signal. Specifically, the sound signal received and converted into digital form in step S101 may be input into the intelligent noise reduction model, which outputs the user's reaction information to the sound signal; from this, it is determined whether noise reduction is required. For example, the reaction information may represent the user's preference for and sensitivity to the sound signal. Whether to reduce the noise of the sound can then be judged from this reaction information: when the reaction information indicates that the user prefers the sound, or that the sound is important or useful to the user, noise reduction may be skipped; when the reaction information indicates that the user is sensitive to the sound, noise reduction may be performed. Whether to reduce the noise of a given sound varies from person to person and from scene to scene, as will be described in detail below.
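The decision rule of step S102 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function `infer_reaction` stands in for the trained intelligent noise reduction model, and its 0-to-1 score scale, the energy-based placeholder scores, and the 0.5 thresholds are all assumptions.

```python
# Hypothetical sketch of the noise-reduction decision in step S102.
# "infer_reaction" is a stand-in for the trained model; a real model
# would be learned from (sound signal, brain wave response) pairs.

def infer_reaction(sound_frame):
    """Placeholder model: fake reaction scores from simple signal statistics."""
    energy = sum(s * s for s in sound_frame) / max(len(sound_frame), 1)
    return {
        "preference": max(0.0, 1.0 - energy),   # quiet frames assumed preferred
        "sensitivity": min(1.0, 2.0 * energy),  # loud frames assumed irritating
        "importance": 0.0,                      # e.g. alarms would score high here
    }

def needs_noise_reduction(sound_frame, sensitivity_threshold=0.5):
    """Apply the decision rule described in step S102."""
    reaction = infer_reaction(sound_frame)
    if reaction["importance"] > 0.5 or reaction["preference"] > 0.5:
        return False          # preferred or important sounds pass through
    return reaction["sensitivity"] > sensitivity_threshold

quiet = [0.01] * 160
loud = [0.9] * 160
print(needs_noise_reduction(quiet), needs_noise_reduction(loud))  # False True
```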
In step S103, when noise reduction of the sound signal is required, the sound signal is noise reduced. Specifically, when the intelligent noise reduction model determines that noise reduction is required, the sound signal may be neutralized by, for example, emitting a sound wave of opposite phase.
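The principle of neutralizing a sound by a reverse sound wave can be shown in a toy form: the anti-noise wave is the incoming wave with its phase inverted, so the two sum to (near) silence. Real active noise cancellation must also compensate for processing latency and the acoustic path between speaker and ear; this sketch deliberately ignores both.

```python
# Minimal sketch of "emitting a reverse sound wave to neutralize the
# sound signal" (step S103). Latency and the acoustic transfer path
# are ignored; this only illustrates the phase-inversion idea.
import math

def anti_noise(samples):
    """Phase-inverted copy of the incoming noise samples."""
    return [-s for s in samples]

# A 440 Hz tone sampled at 8 kHz stands in for the incoming noise.
noise = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(8)]
residual = [n + a for n, a in zip(noise, anti_noise(noise))]
print(max(abs(r) for r in residual))  # 0.0 in this idealized case
```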
The intelligent noise reduction model used in step S102 is a trained model. Specifically, it is trained based on sound signals and the user's brain wave responses to those sound signals; that is, the sound signals and the corresponding brain wave responses serve as the input and output samples of the model.

A large number of sound signals are collected as input samples, either through the sound collector on the headset described above or through other sound collection devices, and the collected analog signals are converted into digital signals and stored. Meanwhile, each collected sound signal is played into the user's ear, and the user's brain wave response to it is monitored, collected, and stored as the corresponding output sample. The brain wave response reflects the user's reaction information to the sound signal; as described above, this reaction information may represent the user's preference for and sensitivity to the sound, so that the intelligent noise reduction model can directly output the user's reaction information to a sound signal.

After the input and output samples are collected, the model is trained with them. Before a user starts using the device, an initial intelligent noise reduction model is trained on a large training set using an existing deep learning algorithm; after the user starts using the device, the sound signals heard by the current user and the brain wave responses to them are stored and used as the model's inputs and outputs to continue training it.
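The training scheme above can be sketched as an input/output-pair learner. The patent specifies a deep learning algorithm but not a model family; a 1-nearest-neighbour lookup over a single RMS feature stands in for it here purely for illustration, and the class name, feature, and labels are all assumptions.

```python
# Hypothetical sketch of the (sound signal -> brain wave response)
# training pairs described above. 1-NN over an RMS feature replaces
# the deep model for illustration only.

def rms(frame):
    """Root-mean-square energy of a sound frame, used as a toy feature."""
    return (sum(s * s for s in frame) / len(frame)) ** 0.5

class NoiseReductionModel:
    def __init__(self):
        self.samples = []          # list of (sound feature, brain wave label)

    def train(self, sound_frame, brainwave_label):
        """Add one (input, output) training pair, as in the online phase."""
        self.samples.append((rms(sound_frame), brainwave_label))

    def predict(self, sound_frame):
        """Return the label of the closest previously seen sound (1-NN)."""
        x = rms(sound_frame)
        return min(self.samples, key=lambda p: abs(p[0] - x))[1]

model = NoiseReductionModel()
model.train([0.01] * 160, "calm")       # quiet sound, calm brain response
model.train([0.9] * 160, "disturbed")   # loud sound, disturbed response
print(model.predict([0.8] * 160))       # disturbed
```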
Fig. 2 is a flowchart illustrating a machine learning based smart wakeup method according to an example embodiment of the present disclosure. As described above, the method may be performed in a device capable of providing a wake-up function, such as a headset, and may be performed by a hardware module (such as a processor) and/or a software module (such as an application executed by the processor) in the headset.
Referring to fig. 2, in step S201, a scene in which a user is located is determined. For example, the scene in which the user is located may be a sleeping, work, learning, driving, etc. scene.
In step S202, brain wave signals of the user are monitored. For example, the user's brain wave data may be monitored in real time through a brain wave acquisition module on the earphone worn by the user.
In step S203, by inputting the brain wave signal of the user to the intelligent wake-up model and receiving the user state corresponding to the brain wave signal of the user from the intelligent wake-up model, it is determined whether the user needs to be woken up according to the user state and the determined scene in which the user is located. Specifically, the intelligent wake-up model correspondingly outputs a user state corresponding to the brain wave signal of the user according to the input brain wave signal of the user, and determines whether the user needs to be waken up according to the user state and the determined scene of the user.
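The decision of steps S203 and S204 can be sketched as a lookup of (scene, user state) pairs. The scene names, state labels, and the rule table below are illustrative assumptions; the patent only specifies that the user is woken when the inferred state matches the current scene.

```python
# Sketch of the wake-up decision in steps S203-S204. The WAKE_RULES
# table is an assumed example of "matching" a user state to a scene.
WAKE_RULES = {
    # (scene, user state) -> wake the user?
    ("driving", "drowsy"): True,              # the scene needs attention
    ("driving", "alert"): False,
    ("sleeping", "sleep_sufficient"): True,   # rest is complete, wake up
    ("sleeping", "sleep_insufficient"): False,
}

def should_wake(scene, user_state):
    """Wake only when the state matches the scene's wake condition."""
    return WAKE_RULES.get((scene, user_state), False)

print(should_wake("driving", "drowsy"),
      should_wake("sleeping", "sleep_insufficient"))  # True False
```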
In step S204, the user is awakened in case the user status matches the determined scene in which the user is located. Specifically, the user may be awakened, for example, by an alarm.
The intelligent wake-up model used in step S203 is a trained model. Specifically, for the scene in which the user is located, the model is trained on the user's brain wave signals and the corresponding user states confirmed through user feedback; that is, the brain wave signals and the feedback-confirmed user states serve as the input and output samples of the model.

A large number of the user's brain wave signals are collected as input samples, either through the brain wave acquisition module on the earphone described above or through other brain wave acquisition modules, and stored. A brain wave signal reflects the corresponding user state (such as drowsiness or sufficient sleep), and whether to wake the user can be judged by matching the user state against the scene. For example, when the scene requires the user's attention, the user is woken if the brain wave signal indicates drowsiness; when the scene allows rest, the user is not woken if the brain wave signal indicates insufficient sleep, and may be woken if it indicates sufficient sleep. The intelligent wake-up model is then trained according to the user's feedback on being woken (for example, whether the user state corresponding to the brain wave signal was accurate, and whether waking the user was reasonable). Whether to wake the user varies from person to person and from scene to scene, as will be described in detail below.

After the input and output samples are collected, the model is trained with them. Before a user starts using the device, an initial intelligent wake-up model is trained for each scene on a large training set using an existing deep learning algorithm. While the user uses the device, the user's brain wave information is monitored for the current scene, the intelligent wake-up model computes in real time whether the user should be woken, feedback is requested from the user, and the brain wave signals together with the feedback-confirmed user states are used as the model's inputs and outputs to continue training it.
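The online feedback loop described above can be sketched as follows: the model predicts a state from a brain wave signal, the user confirms or corrects it, and the confirmed (signal, state) pair is appended to the training set. The mean-amplitude feature, the nearest-neighbour prediction, and the state labels are placeholder assumptions standing in for the deep learning model.

```python
# Sketch of the feedback-driven wake-up model training loop. The
# feature and 1-NN predictor are illustrative stand-ins only.

def mean_amplitude(signal):
    """Toy feature extracted from a brain wave signal."""
    return sum(abs(s) for s in signal) / len(signal)

class WakeUpModel:
    def __init__(self):
        self.pairs = []   # confirmed (feature, user state) pairs

    def predict(self, signal):
        """Predict the user state from the closest confirmed sample."""
        if not self.pairs:
            return "unknown"
        x = mean_amplitude(signal)
        return min(self.pairs, key=lambda p: abs(p[0] - x))[1]

    def feedback(self, signal, confirmed_state):
        """Store the state the user confirmed, for continued training."""
        self.pairs.append((mean_amplitude(signal), confirmed_state))

model = WakeUpModel()
model.feedback([0.1] * 8, "sleep_sufficient")   # user-confirmed label
model.feedback([0.9] * 8, "drowsy")
print(model.predict([0.85] * 8))                # drowsy
```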
Fig. 3 is a block diagram illustrating a machine learning based intelligent noise reduction device according to an exemplary embodiment of the present disclosure.
Referring to fig. 3, the intelligent noise reduction device based on machine learning includes: a sound receiving module 31, a noise reduction determining module 32 and an intelligent noise reduction module 33.
The sound receiving module 31 is configured to receive a sound signal; the noise reduction determining module 32 is configured to determine whether noise reduction of the sound signal is required by inputting the sound signal into the intelligent noise reduction model and receiving the user's reaction information to the sound signal from the model; and the intelligent noise reduction module 33 is configured to noise reduce the sound signal if noise reduction is required.
Fig. 4 is a block diagram illustrating a machine learning based intelligent wake-up device according to an exemplary embodiment of the present disclosure.
Referring to fig. 4, the intelligent wake-up apparatus based on machine learning includes: a scene determination module 41, an electroencephalogram monitoring module 42, a wake-up determination module 43, and an intelligent wake-up module 44.
The scene determination module 41 is configured to determine the scene in which the user is located; the brain wave monitoring module 42 is configured to monitor the user's brain wave signals; the wake-up determination module 43 is configured to determine whether the user needs to be woken according to the user state and the determined scene, by inputting the user's brain wave signals into the intelligent wake-up model and receiving from it the user state corresponding to the brain wave signals; and the intelligent wake-up module 44 is configured to wake the user if the user state matches the determined scene.
The intelligent noise reduction method and the intelligent wake-up method can be applied to various scenes such as sleep, entertainment, work, study, driving, and conversation, and a separate intelligent noise reduction model and intelligent wake-up model are trained for each business scene. The models can be trained through an app on a smart terminal. The smart terminal app mainly has the following functions: obtaining model training data from the memory of devices such as headphones and transmitting the data to a background server for training; downloading the trained models from the background server and transmitting them to devices such as headphones; providing a feedback function for the user, whose feedback serves as additional training data to further optimize the models; and providing functions such as settings and display of headphone usage. The background server provides model building and training functions for the intelligent noise reduction and intelligent wake-up models.
Fig. 5 is a flowchart illustrating intelligent noise reduction in different scenarios according to an exemplary embodiment of the present disclosure. The user uploads, in advance through the terminal APP, the sound sources that must be masked and those that must not be masked, and this sound source data is synchronized to the memory of a device such as the headphones. In use, the device acquires sound signals through its sound collector and converts them into digital signals. The processor compares each sound against the must-mask and must-not-mask sound sources using a sound similarity algorithm, determines whether the sound should be masked, and processes it accordingly. If the incoming sound source is unknown, i.e., not a preset source, the intelligent noise reduction model determines whether the sound signal requires noise reduction, and the sound signal together with the corresponding brain wave information is stored in memory. If the model determines that noise reduction is needed, the noise reduction module is invoked; otherwise the sound is left unprocessed. After the headphones are connected to the intelligent terminal, the stored sound signals and brain wave information are automatically synchronized to the terminal APP, which can then transmit the data to the background server to further train the noise reduction model to meet personalized requirements.
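The decision flow of fig. 5 can be sketched as follows. This is only an illustration: the toy similarity function, the 0.9 threshold, and the use of small feature vectors in place of real audio fingerprints are all assumptions, and the actual sound similarity algorithm is not specified in the disclosure:

```python
# Sketch of the fig. 5 flow: preset libraries first, learned model as
# fallback for unknown sources, with unknown sounds logged for training.

def similarity(a, b):
    # Toy similarity on feature vectors: 1 - mean absolute difference.
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def decide(sound, must_mask, must_pass, model, log, threshold=0.9):
    """Return True if the sound signal should be noise-reduced."""
    for src in must_mask:
        if similarity(sound, src) >= threshold:
            return True                 # preset source: always mask
    for src in must_pass:
        if similarity(sound, src) >= threshold:
            return False                # preset source: never mask
    log.append(sound)                   # unknown source: store for training
    return model(sound)                 # fall back to the learned model

must_mask = [[1.0, 0.0]]                # e.g. electric drill fingerprint
must_pass = [[0.0, 1.0]]                # e.g. alarm fingerprint
model = lambda s: sum(s) / len(s) > 0.5  # stand-in for the trained model
log = []
```

In the real device the logged entries would also include the brain wave response, as described above.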
For example, in a sleep scenario, electric drill sounds, snoring, and the like are added to the must-mask sound source library, while telephone rings, alarm sounds, and the like are added to the must-not-mask library; while the user sleeps, drill sounds are masked directly and alarm sounds are left unprocessed. If an incoming sound source is unknown, i.e., not a preset source, the intelligent noise reduction model determines whether the sound signal requires noise reduction. For example, if user A is sensitive to snoring while sleeping, the model determines that snoring should be masked; if user B is insensitive to snoring while sleeping, the model determines that snoring need not be masked. The snoring sound and the user's brain wave response to it are stored so as to further train the intelligent noise reduction model.
For example, in work and study scenarios, chatting, laughing, and the like are added to the must-mask sound source library, while the sound of the user's name being called, alarm sounds, and the like are added to the must-not-mask library; while the user works or studies, the chatting and laughing of classmates or colleagues are masked directly, and name calls and alarm sounds are left unprocessed. For example, in music-listening and call scenarios, noisy ambient sounds (such as background noise in railway stations, subways, and airplanes) are added to the must-mask library, and alarm sounds and the like are added to the must-not-mask library; while the user listens to music or makes a call, noisy ambient sounds are masked directly and alarm sounds are left unprocessed.
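The per-scene preset libraries from the examples above could be organized as in the following sketch. The string tags stand in for the stored sound source data (the real device would store audio fingerprints, not labels), and the scene and tag names are illustrative:

```python
# Hypothetical per-scene preset sound-source libraries mirroring the
# examples in the text; tags are illustrative stand-ins for audio data.

SCENE_LIBRARIES = {
    "sleep": {
        "must_mask": {"electric_drill", "snoring"},
        "must_pass": {"phone_ring", "alarm"},
    },
    "work_study": {
        "must_mask": {"chatting", "laughing"},
        "must_pass": {"own_name", "alarm"},
    },
    "music_call": {
        "must_mask": {"station_noise", "subway_noise", "cabin_noise"},
        "must_pass": {"alarm"},
    },
}

def preset_action(scene, source_tag):
    """Return 'mask', 'pass', or 'unknown' for a recognized source tag."""
    lib = SCENE_LIBRARIES[scene]
    if source_tag in lib["must_mask"]:
        return "mask"
    if source_tag in lib["must_pass"]:
        return "pass"
    return "unknown"
```

A source tagged "unknown" would then be handed to the intelligent noise reduction model, as in the fig. 5 flow.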
Fig. 6 is a flowchart illustrating intelligent wake-up in different scenarios according to an exemplary embodiment of the present disclosure. The trained original intelligent wake-up model is stored on a device such as the headphones. During use, for the scene the user is in, the brain wave sensor continuously monitors the user's brain waves, and the intelligent wake-up model computes whether the alarm should currently be started to wake the user; if not, monitoring continues, otherwise the alarm is started. Through the intelligent terminal APP the user can give feedback on whether the wake-up function is intelligent enough, and the feedback data can be uploaded to the background server to retrain the model.
For example, in a sleep scenario, when the monitored brain wave information reflects that the user has not slept enough, the user is not awakened; once the brain wave signals indicate sufficient sleep, the alarm can be started to wake the user. For example, in scenarios requiring concentration, such as working, studying, or driving, the alarm is started to wake the user as soon as the monitored brain wave information reflects drowsiness.
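The scene-dependent wake-up decision in these examples can be sketched as a rule table: the model maps brain waves to a user state, and the alarm fires only when that state matches the current scene's wake-up condition. The state names and the rule table below are assumptions for illustration:

```python
# Hypothetical mapping from scene to the user state that triggers wake-up.
WAKE_RULES = {
    "sleep": "slept_enough",  # wake only once sleep is sufficient
    "work": "drowsy",         # wake as soon as drowsiness appears
    "driving": "drowsy",
}

def should_wake(scene, user_state):
    """True when the model-inferred user state matches the scene's rule."""
    return WAKE_RULES.get(scene) == user_state
```

In the fig. 6 flow, a False result means monitoring simply continues; a True result starts the alarm.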
In addition to the above-mentioned scenarios, the intelligent noise reduction method and intelligent wake-up method of the embodiments of the present disclosure may also be applied in other settings requiring such functions, such as indoor and vehicle-mounted active noise reduction equipment, sound-insulating walls, construction, riding, and other scenarios in which headphones are used.
Further, according to an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed, implements the machine learning-based intelligent noise reduction method or the machine learning-based intelligent wake-up method according to the present disclosure.
By way of example, the computer readable storage medium may carry one or more programs which when executed implement the steps of: receiving a sound signal; determining whether noise reduction is required for the sound signal by inputting the sound signal into the intelligent noise reduction model and receiving response information of a user to the sound signal from the intelligent noise reduction model; in the case where noise reduction is required for the sound signal, noise reduction is performed for the sound signal.
By way of example, the computer readable storage medium may carry one or more programs which when executed implement the steps of: monitoring brain wave signals of a user; determining whether a user needs to be awakened or not by inputting brain wave signals of the user into the intelligent awakening model and receiving user states corresponding to the brain wave signals of the user from the intelligent awakening model; in case the user needs to be awakened, the user is awakened.
The computer readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the present disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer program embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing. The computer readable storage medium may be embodied in any device, or may exist alone without being assembled into the device.
Fig. 7 shows a schematic diagram of a computing device according to an exemplary embodiment of the present disclosure.
Referring to fig. 7, a computing device 7 according to an exemplary embodiment of the present disclosure includes a processor 71 and a memory 72, wherein the memory 72 stores a computer program which, when executed by the processor, implements the machine learning-based intelligent noise reduction method or the machine learning-based intelligent wake-up method of the above-described exemplary embodiments.
By way of example, the following steps may be implemented when the computer program is executed by a processor: receiving a sound signal; determining whether noise reduction is required for the sound signal by inputting the sound signal into the intelligent noise reduction model and receiving response information of a user to the sound signal from the intelligent noise reduction model; in the case where noise reduction is required for the sound signal, noise reduction is performed for the sound signal.
By way of example, the following steps may be implemented when the computer program is executed by a processor: determining a scene where a user is located; monitoring brain wave signals of a user; the brain wave signals of the user are input into the intelligent awakening model, the user state corresponding to the brain wave signals of the user is received from the intelligent awakening model, and whether the user needs to be awakened or not is determined according to the user state and the determined scene of the user; and waking up the user in the case that the user state matches the determined scene in which the user is located.
Computing devices in embodiments of the present disclosure may include, but are not limited to, devices such as mobile phones, notebook computers, PDAs (personal digital assistants), PADs (tablet computers), desktop computers, and the like. The computing device illustrated in fig. 7 is merely an example and should not be taken as limiting the functionality and scope of use of embodiments of the present disclosure.
The intelligent noise reduction method, the intelligent wake-up method, and the apparatuses using the same according to the exemplary embodiments of the present disclosure have been described above with reference to figs. 1 to 4. However, it should be understood that the machine learning-based intelligent noise reduction apparatus, the machine learning-based intelligent wake-up apparatus, and the units thereof shown in figs. 3 and 4 may each be configured as software, hardware, firmware, or any combination thereof that performs specific functions; that the computing device shown in fig. 7 is not limited to the components shown above, as components may be added or removed as needed; and that the above components may also be combined.
According to the intelligent noise reduction method, the intelligent wake-up method, and the devices using the same, training and optimizing the intelligent noise reduction model on different users' brain wave responses to sounds effectively addresses the problem that different users have different preferences for, and sensitivities to, the same sound; and training and optimizing the intelligent wake-up model on different users' feedback on being awakened in different scenarios effectively addresses the problem that different users have different wake-up needs in different scenarios, thereby meeting the personalized needs of different users.
While the present disclosure has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure as defined by the following claims.

Claims (9)

1. An intelligent noise reduction method based on machine learning, comprising:
training an intelligent noise reduction model based on brain wave signals for each scene;
determining a scene where a user is located;
receiving a sound signal;
when a sound source of a sound signal is included in a preset sound source library of the scene, determining whether noise reduction of the sound signal is required based on a preset process corresponding to the sound source;
determining whether noise reduction of the sound signal is required by inputting the sound signal into the intelligent noise reduction model and receiving response information of a user to the sound signal from the intelligent noise reduction model when a sound source of the sound signal is not included in a preset sound source library of the scene;
in the case where noise reduction is required for the sound signal, noise reduction is performed for the sound signal.
2. The machine learning based intelligent noise reduction method of claim 1, wherein the intelligent noise reduction model is trained based on the sound signal and brain wave response of the user to the sound signal.
3. The intelligent noise reduction method based on machine learning of claim 1, wherein the sound signal is received through headphones.
4. The intelligent noise reduction method based on machine learning of claim 2, wherein the user's response information to the sound signal includes the user's preference and sensitivity to the sound signal.
5. The machine learning based intelligent noise reduction method of claim 2, wherein training the intelligent noise reduction model comprises training the intelligent noise reduction model by taking a sound signal and a brain wave response of a user to the sound signal as input and output samples of the intelligent noise reduction model.
6. The machine learning based intelligent noise reduction method of claim 1 wherein noise reducing the sound signal includes emitting a reverse sound wave to neutralize the sound signal.
7. An intelligent noise reduction device based on machine learning, comprising:
a model training module configured to train an intelligent noise reduction model based on brain wave signals for each scene;
the scene determining module is configured to determine a scene in which a user is located;
a sound receiving module configured to receive a sound signal;
a noise reduction determination module configured to: determining whether noise reduction of the sound signal is required based on a preset process corresponding to the sound source when the sound source of the sound signal is included in the preset sound source library of the scene, and determining whether noise reduction of the sound signal is required by inputting the sound signal into the intelligent noise reduction model and receiving response information of a user to the sound signal from the intelligent noise reduction model when the sound source of the sound signal is not included in the preset sound source library of the scene;
the intelligent noise reduction module is configured to reduce noise of the sound signal under the condition that the noise of the sound signal is required to be reduced.
8. A computer readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the intelligent noise reduction method based on machine learning of any one of claims 1 to 6.
9. A computing device, comprising:
a processor;
a memory storing a computer program which, when executed by a processor, implements the intelligent noise reduction method based on machine learning as claimed in any one of claims 1 to 6.
CN202010972698.9A 2020-09-16 2020-09-16 Intelligent noise reduction method, intelligent wake-up method and device using same Active CN112130664B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010972698.9A CN112130664B (en) 2020-09-16 2020-09-16 Intelligent noise reduction method, intelligent wake-up method and device using same


Publications (2)

Publication Number Publication Date
CN112130664A CN112130664A (en) 2020-12-25
CN112130664B true CN112130664B (en) 2023-07-21

Family

ID=73846808

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010972698.9A Active CN112130664B (en) 2020-09-16 2020-09-16 Intelligent noise reduction method, intelligent wake-up method and device using same

Country Status (1)

Country Link
CN (1) CN112130664B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112992153B (en) * 2021-04-27 2021-08-17 太平金融科技服务(上海)有限公司 Audio processing method, voiceprint recognition device and computer equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106725462A (en) * 2017-01-12 2017-05-31 兰州大学 Acousto-optic Sleep intervention system and method based on EEG signals
CN109316170A (en) * 2018-11-16 2019-02-12 武汉理工大学 Brain wave assisting sleep and wake-up system based on deep learning
CN109686367A (en) * 2018-12-17 2019-04-26 科大讯飞股份有限公司 A kind of earphone noise-reduction method, device, equipment and readable storage medium storing program for executing
CN110753922A (en) * 2017-12-07 2020-02-04 深圳市柔宇科技有限公司 Emotion-based content recommendation method and device, head-mounted device and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant