NL2016471B1 - Teeth grinding detection.

Teeth grinding detection.

Info

Publication number
NL2016471B1
Authority
NL
Netherlands
Prior art keywords
grinding
candidate
event
program product
computer program
Prior art date
Application number
NL2016471A
Other languages
Dutch (nl)
Inventor
Allessie Michiel
Kolchygin Bohdan
Original Assignee
Bruxlab B V
Priority date
Filing date
Publication date
Application filed by Bruxlab B V filed Critical Bruxlab B V
Priority to NL2016471A priority Critical patent/NL2016471B1/en
Application granted granted Critical
Publication of NL2016471B1 publication Critical patent/NL2016471B1/en


Abstract

The disclosure relates to a computer program product for detecting teeth grinding of a person. The computer program performs noise filtering of respective portions of a sound signal from the person to determine a candidate grinding portion of the sound signal and detects one or more candidate grinding events in the candidate grinding portion. An event feature vector of features of the candidate grinding event is generated and fed to a neural network trained for teeth grinding detection. Teeth grinding is detected for the candidate grinding event on the basis of at least one output of the neural network.

Description

Teeth grinding detection
FIELD OF THE INVENTION
The invention relates to the field of detecting sleep disorders, in particular teeth grinding. More specifically, the invention relates to a computer program product comprising instructions that, when run on a computer, detect teeth grinding.
BACKGROUND
Bruxism involves grinding and clenching of the teeth in an abnormal, excessive, non-functional, nocturnal and subconscious manner. Sleep bruxism is the condition wherein teeth grinding and clenching occur during a person's sleep. The sound produced by the teeth grinding is often heard by a sleeping partner but not by the grinding person himself.
At a minimum, bruxism will typically result in excessive tooth wear and periodontal problems. Unfortunately, in many cases, bruxing action damages not only the teeth but also the supporting structure of the teeth, including both the hard bony material and the soft tissue. In more extreme cases, these problems lead to temporomandibular disorders, jaw displacement, stiff neck and severe headaches.
One recognized diagnostic technique for obtaining an accurate sleep bruxism diagnosis involves polysomnographic (PSG) sleep diagnosis. PSG uses a plurality of sensors and electrodes applied to a person for registering physiological parameters (brain activity, eye movement) of the person during sleep in combination with electromyographical (EMG) measurements. If audio and video registration is performed simultaneously with the PSG diagnosis, other nocturnal oral sounds can be distinguished from the bruxism-related measurements. If a person is diagnosed to have four or more bruxism events per hour of sleep, the person is recognized as a bruxer.
Obviously, such diagnostic techniques are laborious and troublesome to the person. Often, to obtain a bruxism diagnosis, the person needs to stay in a dedicated PSG facility.
Mobile health is a current trend in health care and holds the promise that mobile devices already owned by a user can be used to detect health issues. WO 2004/087258 discloses an apparatus to be worn on a person's head for detecting bruxism. The apparatus involves a plurality of electrodes to measure muscle activity of the person. Measurements may be performed by means of sound via contact microphones. The microphone is used for the detection of frequencies generated by bruxism. WO 2011/150362 discloses a mobile device, e.g. a smart phone, comprising a microphone to detect sounds, including bruxism sounds. The mobile device can be used to detect audible sounds from a person in a normal sleep environment and determine whether the audible sounds are indicative of normal sleep or a sleep disorder.
Said publications are examples of mobile health applications using sensors currently existing in common mobile computer devices, such as smart phones and tablet computers. However, whereas the disclosed methods may provide an indication to an owner of a mobile device to visit a dentist to further investigate whether or not the person is a bruxer, the disclosed techniques are insufficient for a dentist, or other health professional, to obtain a reliable bruxism detection from the measurements performed by the mobile device.
SUMMARY
It is an object of the present disclosure to present a computer program product and method to detect teeth grinding in a more reliable manner.
It is a further object of the present disclosure to present a computer program product and method to obtain a sleep bruxism diagnosis only on the basis of a sound signal detected by a mobile device.
To that end, a computer program product is disclosed for detecting teeth grinding of a person. The computer program product may be a mobile device storing instructions (e.g. an application or app) as described below. The computer program product may also comprise a medium, e.g. a non-transitory medium, storing these instructions. The computer program product comprises instructions for a computer, which instructions, when executed by the computer, cause the computer to process a sound signal from the person.
The processing includes noise filtering of respective portions of the sound signal to determine a candidate grinding portion of the sound signal. Noise filtering may exclude portions only comprising noise, e.g. background noise, from further processing, thereby saving processing and memory capacity at the mobile device. Processing of the sound signal portion by portion enables further processing of a sound signal portion while recording the sound signal from the person.
The processing further includes detecting one or more candidate grinding events in the candidate grinding portion. Candidate grinding events may be determined using signal grouping and/or threshold detection. Events may be excluded from being candidate grinding events on the basis of certain characteristics. One example includes periodic signals during a time interval, e.g. during a portion of a signal. Such a periodic signal may be a sound related to snoring.
The processing further includes generating an event feature vector of features of a candidate grinding event and feeding the event feature vector to a neural network trained for teeth grinding detection. The event feature vector contains features of characteristics of sound signals for which teeth grinding and/or non-teeth grinding has been determined. The neural network is trained with these data. The neural network is preferably contained in the mobile device, but may also be accessible remotely from the mobile device using a network connection.
One or more outputs of the neural network provide one or more values indicating whether the candidate grinding event should be classified as a teeth grinding event. It should be noted that the one or more outputs may be further processed before being displayed to a person. For example, the grinding rate per hour may be calculated and displayed to the person.
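As an illustration of such post-processing, a minimal sketch of a grinding-rate calculation is given below. It assumes per-event classifications and the recording duration are available; the function and variable names are illustrative and not taken from the disclosure.

```python
def grinding_rate_per_hour(event_labels, recording_seconds):
    """Count events classified as grinding and normalise to events per hour."""
    grinding_events = sum(1 for label in event_labels if label == "G")
    hours = recording_seconds / 3600.0
    return grinding_events / hours if hours > 0 else 0.0

# Example: 10 grinding events over a 7-hour night gives about 1.4 events per hour;
# the background section cites four or more events per hour as indicative of a bruxer.
rate = grinding_rate_per_hour(["G", "NG", "G", "NG"] * 5, 7 * 3600)
```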
The disclosed computer program product enables a mobile device to be used for teeth grinding detection by means of a pre-processing stage wherein noise filtering and the detection of candidate grinding events occur in order to minimize resource usage of the mobile device. The neural network is trained with sufficient event feature vectors to obtain an accurate teeth grinding detection, enabling health professionals to diagnose teeth grinding.
In one embodiment, the processing instructions involve noise filtering wherein a noise coefficient is calculated for the signal portion. The noise coefficient is compared with a threshold and the portion of the sound signal is only processed when the noise coefficient meets the threshold. In one embodiment, the portion of the sound signal preferably has a duration of less than 5 minutes, preferably less than 2 minutes, e.g. 1 minute or 30 seconds. Said limitations to the duration of the sound portion provide for efficient resource use at the mobile device. In particular, the embodiments enable continuous recording of a sound signal in a mobile device for a considerable period of time (e.g. a night of sleep) while meeting memory limitations in the mobile device.
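A minimal sketch of this portion-wise gating is shown below, assuming a recording split into 30-second portions and a noise_coefficient helper such as the one sketched later for the histogram-based calculation; the threshold value and all names are illustrative assumptions.

```python
PORTION_SECONDS = 30      # portion duration, well below the 5-minute bound
NOISE_THRESHOLD = 4.0     # illustrative value; the disclosure does not fix one

def portions(signal, sample_rate, seconds=PORTION_SECONDS):
    """Yield consecutive fixed-length portions (chunks) of the recorded signal."""
    size = seconds * sample_rate
    for start in range(0, len(signal), size):
        yield signal[start:start + size]

def candidate_grinding_portions(signal, sample_rate, noise_coefficient):
    """Keep only portions whose noise coefficient meets the threshold."""
    for portion in portions(signal, sample_rate):
        if noise_coefficient(portion) >= NOISE_THRESHOLD:
            yield portion  # candidate grinding portion (CGP)
        # portions containing only noise are discarded, saving memory and CPU
```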
In one embodiment, the detection of candidate grinding events comprises processing instructions assigning a plurality of peaks of the sound signal in the candidate grinding portion to one candidate grinding event on the basis of the distance between peaks. The threshold distance used to decide whether or not peaks of the sound signal should be combined into a single candidate grinding event can be determined by a health professional. It should be noted that the distance between peaks may be expressed in any convenient unit, e.g. as a time, as a number of samples, etc. Additionally or alternatively, the detection of a candidate grinding event involves determining whether a signal meets a noise threshold.
In one embodiment, the features of the event feature vector include a plurality of signal parameters, e.g. one or more or all of:
- signal energy
- spectral centroid
- spectral flatness
- zero crossing rate
- average magnitude
- spectral skewness
- spectral kurtosis
- mean of spectral flux
- standard deviation of spectral flux
The applicant has found that these signal parameters can be used to determine whether or not a candidate grinding event relates to teeth grinding.
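A sketch of how these signal parameters might be computed for one candidate grinding event is given below. The disclosure does not specify frame sizes or normalisation, so the formulas are conventional choices and the names are illustrative.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def event_signal_features(event, frame=1024, hop=512):
    """Conventional definitions of the signal parameters listed above."""
    spectrum = np.abs(np.fft.rfft(event))
    freqs = np.arange(len(spectrum))
    frames = [np.abs(np.fft.rfft(event[i:i + frame]))
              for i in range(0, len(event) - frame, hop)]
    flux = [np.sum((frames[i + 1] - frames[i]) ** 2) for i in range(len(frames) - 1)]
    return {
        "signal_energy": float(np.sum(event ** 2)),          # sum of squared amplitudes
        "spectral_centroid": float(np.sum(freqs * spectrum) / np.sum(spectrum)),
        "spectral_flatness": float(np.exp(np.mean(np.log(spectrum + 1e-12)))
                                   / (np.mean(spectrum) + 1e-12)),
        "zero_crossing_rate": float(np.mean(np.abs(np.diff(np.sign(event)))) / 2),
        "average_magnitude": float(np.mean(np.abs(event))),
        "spectral_skewness": float(skew(spectrum)),
        "spectral_kurtosis": float(kurtosis(spectrum)),
        "spectral_flux_mean": float(np.mean(flux)),
        "spectral_flux_std": float(np.std(flux)),
    }
```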
In one embodiment, the features of the feature vector include one or more Mel frequency cepstrum coefficients (MFCC), and, optionally:
- maximum MFCC
- minimum MFCC
- mean MFCC
- standard deviation MFCC
Mel frequency cepstrum coefficients are used for automatic speech recognition applications and were found by the applicant to be useful for teeth grinding detection as well.
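The MFCC-derived features could, for example, be computed with the librosa library (an assumption; the disclosure does not name a library), reducing the per-frame coefficients to the summary statistics listed above:

```python
import numpy as np
import librosa

def mfcc_features(event, sample_rate, n_mfcc=10):
    """Summary statistics over the first n_mfcc Mel frequency cepstrum coefficients."""
    coeffs = librosa.feature.mfcc(y=np.asarray(event, dtype=float),
                                  sr=sample_rate, n_mfcc=n_mfcc)
    return {
        "mfcc_max": float(np.max(coeffs)),
        "mfcc_min": float(np.min(coeffs)),
        "mfcc_mean": float(np.mean(coeffs)),
        "mfcc_std": float(np.std(coeffs)),
    }
```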
In one embodiment, the features of the feature vector include custom features, e.g. a rate feature that describes the relation between a loudest part of the candidate event and the silence level of the candidate grinding portion. Such user-determined features enable further improvements of the teeth grinding detection.
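The disclosure does not define the rate feature exactly; a minimal sketch, assuming the loudest part of the event and the silence level (e.g. the histogram base level described later) are already estimated, could be:

```python
def rate_feature(event_peak_level, silence_level):
    """Illustrative ratio of the loudest part of the event to the portion's silence level."""
    return event_peak_level / silence_level if silence_level > 0 else 0.0
```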
In one embodiment, the detection of teeth grinding comprises outputting, by the neural network, non-normalized outputs comprising a first number indicating grinding activation and a second number indicating non-grinding activation. The two outputs of the neural network, a grinding neuron activation and a non-grinding neuron activation, provide two numbers representing non-normalized probabilities of teeth grinding and non-teeth grinding, respectively, for a particular candidate event. The double output of the neural network provides more flexibility in the teeth grinding processing, e.g. during training of the neural network, and provides more information for improving the neural network performance. For example, in one embodiment, a relation of the first number and the second number, e.g. the difference of the first number and the second number, to a certain decision threshold determines whether or not a candidate grinding event is a teeth grinding event.
In one embodiment, the processing instructions process further inputs for detecting teeth grinding. These further inputs can be used for pre-processing to further reduce the number of candidate grinding events for which event features are fed to the neural network and/or for post-processing, i.e. after having obtained the one or more outputs from the neural network. In one example, the mobile device may run other sleep disorder detection algorithms, e.g. a detection algorithm detecting snoring.
As another example, an input signal may be used from other teeth grinding detection devices to further support the classification of an event as a teeth grinding event. One example includes a device to be attached to the user to determine bruxism-related muscle events and signal these to the mobile device. The non-prepublished US patent application 14,817,252 in the name of the inventor discloses such a system for the detection of bruxism. The system comprises at least two sensor devices arranged for being attached to the skin of a patient, a first sensor device of the at least two sensor devices arranged for being attached to the skin covering a left masseter muscle of the patient and a second sensor device of the at least two sensor devices arranged for being attached to the skin covering a right masseter muscle of the patient. Each of the at least two sensor devices comprises an accelerometer arranged for measuring acceleration of the sensor device and a processor module arranged for processing detection signals of the accelerometer. The processor module is arranged for transmitting the processed signals to a wireless transceiver, the wireless transceiver being arranged for communicating the processed signals to a remote processor unit, e.g. the mobile device. The system comprises a power source arranged for providing power for the system. The mobile device may use the system measurements to classify a candidate grinding event as a teeth grinding event with increased certainty.
In one embodiment, the computer program product is contained in a mobile consumer computer, e.g. a smart phone, a tablet computer or a laptop computer, comprising at least one sound sensor. The at least one sound sensor obtains the sound signal from a person. The mobile consumer computer enables measurements at the person's home to enable diagnosis of teeth grinding. The sound sensor may be a microphone, e.g. a non-contact microphone, of the mobile consumer computer. The microphone is positioned at a distance of 10 centimetres to 1 meter from the person. In one embodiment, the mobile consumer computer comprises a display and the instructions include instructions for displaying information relating to the detection of teeth grinding on the display.
One further aspect of the disclosure relates to a computer-implemented method comprising executing one or more of the instructions contained in the computer program product as described above. A still further aspect of the disclosure relates to a mobile consumer computer comprising the computer program product as described above and at least one processor configured for executing the instructions stored on the computer program product. The mobile consumer computer may be a smart phone, a tablet computer or a laptop computer, comprising at least one sound sensor. In one embodiment, the computer program product includes the neural network trained for teeth grinding detection.
It is noted that the invention relates to all possible combinations of features recited in the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
Aspects of the invention will be explained in greater detail by reference to exemplary embodiments shown in the drawings, in which:
FIGS. 1A and 1B are schematic illustrations of a mobile device obtaining a teeth grinding detection application from a server and the mobile device in use when executing the teeth grinding detection application;
FIG. 2 is a flow chart illustrating stages of the processing for detecting teeth grinding;
FIG. 3 is a schematic diagram of a mobile device showing processing stages for detecting teeth grinding;
FIGS. 4A-4D are a more detailed flow chart illustrating processing steps for detecting teeth grinding (FIG. 4A) and embodiments of implementing these processing steps (FIGS. 4B-4D);
FIG. 5 illustrates an example of a sound signal portion and the processing thereof;
FIG. 6 illustrates another example of a sound signal portion and results of the teeth grinding detection; and
FIG. 7 is a schematic block diagram of a general system, such as a mobile consumer device, configured for detecting teeth grinding.
DETAILED DESCRIPTION OF THE DRAWINGS
FIG. 1A is a schematic illustration of a mobile consumer device 1, e.g. a smart phone, contacting a server system S in network N storing a computer program product 2 (e.g. an app) containing instructions for detecting teeth grinding. Mobile device 1 is configured for downloading, storing and executing the computer program product 2 in a manner known as such. FIG. 1B is a schematic illustration of the mobile device 1 being positioned at a distance D from a person P.
Distance D is in the order of 10 cm to 1 meter.
Person P is asleep and has activated the app 2 before sleeping in order to monitor whether he suffers from teeth grinding. To that end, mobile device 1 has stored the app 2 in a memory 3 and contains at least one sound sensor 4 feeding a sound signal from person P to processor 5 executing the app 2. Mobile device 1 contains a display 6 for displaying information from the app 2. Further details of mobile device 1 will be described with reference to FIG. 7.
FIG. 2 is a flow chart illustrating stages of the processing of a computer-implemented method for detecting teeth grinding. A portion of a sound signal SOUND is fed to a noise filtering stage S1 to determine a candidate grinding portion CGP of the sound signal. Noise filtering may exclude sound signal portions only comprising noise, e.g. background noise, from further processing, thereby saving processing and memory capacity at the mobile device 1. Processing of the sound signal in a portion-by-portion fashion enables further processing of a sound signal portion while continuing to record the sound signal from the person P.
In a second stage S2, one or more candidate grinding events CGE are detected in the candidate grinding portion CGP. Candidate grinding events CGE may be determined using signal grouping and/or threshold detection. Events may be excluded from being candidate grinding events on the basis of certain characteristics. One example includes periodic signals during a time interval, e.g. during a portion of a signal. Such a periodic signal may be a sound related to snoring.
In a third stage S3, the processing further includes generating an event feature vector of features of a candidate grinding event CGE and feeding the event feature vector to a neural network trained for teeth grinding detection. The event feature vector contains features of characteristics of sound signals. The neural network is trained with the characteristics for which teeth grinding and/or non-teeth grinding has been determined.
One or more outputs G/NG of the neural network provide one or more values indicating whether the candidate grinding event should be classified as a teeth grinding event.
Optionally, the teeth grinding detection stages may use further inputs I from other detection stages S4. As shown in FIG. 2, these further inputs I can be used for pre-processing (i.e. prior to stage S3) to further reduce the number of candidate grinding events CGE for which event features are fed to the neural network and/or for post-processing, i.e. after having obtained the one or more outputs from the neural network (after stage S3). The input I may be used to increase the accuracy of the teeth grinding detection.
In one example, the mobile device 1 may run another sleep disorder detection algorithm, e.g. a detection algorithm detecting snoring. If the instructions find from further input I that a particular candidate grinding event CGE was also detected as a snoring event (e.g. by comparing the time windows for the events), the probabilities for the event to be a teeth grinding event and to be a snoring event may be compared to finally classify the event as a grinding event or a snoring event. As another example, an input signal I may be used from other teeth grinding detection devices to further support the classification of an event as a teeth grinding event. One example includes a device to be attached to the user to determine bruxism-related muscle events around the mouth of the person P and signal these to the mobile device 1. The non-prepublished US patent application 14,817,252 in the name of the inventor discloses such a system for the detection of bruxism and is incorporated in the present application by reference. The system comprises at least two sensor devices arranged for being attached to the skin of a patient, communicating muscle movements to the mobile device using accelerometers. The mobile device 1 may use the system measurements of stage S4 to classify a candidate grinding event as a teeth grinding event with increased certainty, as sketched below.
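A minimal sketch of such post-processing, assuming both detectors report an event time window and a probability-like score; the dictionary keys and names are illustrative, not taken from the disclosure:

```python
def resolve_with_snoring(grind_event, snore_events):
    """Reclassify a candidate grinding event if a snoring detector saw the same time window."""
    for snore in snore_events:
        overlaps = grind_event["start"] < snore["end"] and snore["start"] < grind_event["end"]
        if overlaps and snore["probability"] > grind_event["probability"]:
            return "snoring"   # the snoring interpretation is more likely
    return "grinding"
```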
FIG. 3 is a schematic illustration of a mobile device showing stages S1-S3 as discussed with reference to FIG. 2.
Sound sensor 4 detects sound from person P and produces a sound signal fed to stage S1. The sound signal is cut into sound portions, also referred to as chunks. A sound portion is the basic unit for processing in stages S1-S3 and has a duration of less than 5 minutes. In the present application, the duration of a sound portion is taken as 30 seconds.
If a portion of a sound signal is detected to contain more than noise, such a candidate grinding portion CGP is fed to stage S2. Stage S2 is used for detecting one or more candidate grinding events CGE in the candidate grinding portion CGP. For an event in the candidate grinding portion CGP to be recognized as a candidate grinding event CGE, one or more criteria apply as will be discussed below in further detail with reference to FIGS. 4-6.
Stage S3 contains a neural network. Vectors V(CGE) of features of each detected candidate grinding event CGE are fed to the neural network for detecting events amongst the candidate grinding events CGE that should be classified as grinding events G and events that should be classified as non-grinding events NG.
The features of the event feature vector include a plurality of signal parameters:
- signal energy (sum of the squares of the amplitude)
- spectral centroid (mass centre of the sound spectrum)
- spectral flatness
- zero crossing rate (amount of sign changes of the amplitude)
- average magnitude (average value of the absolute amplitude)
- spectral skewness (spectral asymmetry)
- spectral kurtosis (spectral tailedness)
- mean of spectral flux (mean timbre of the signal)
- standard deviation of spectral flux
- one or more (e.g. the first ten) Mel frequency cepstrum coefficients (MFCC)
- maximum MFCC
- minimum MFCC
- mean MFCC
- standard deviation MFCC; and
- one or more custom features, such as a rate feature that describes the relation between a loudest part of the candidate event and the silence level of the candidate grinding portion.
Further processing stages S5 may be used before presenting information representing the output of the neural network in stage S3 on the display 6.
It should be noted that one or more of the stages S1-S3 and S4 may be implemented as program code executed by processor 5 of mobile device 1. At least stages S1-S3 are part of the computer program product 2 obtained from server system S in the network N.
FIG. 4A is a more detailed flow chart illustrating some processing steps taken for each of the stages in order to obtain an accurate teeth grinding detection. The steps will be explained with reference to FIG. 5 showing an example of the processing steps.
In a first step, S100, the sound obtained via sound sensor 4 of mobile device 1 is cut into sound signal portions as the basis data unit for the further processing steps. A small sound signal portion SOUND is shown in the upper diagram as the noisy black signal.
In a second step S102, a noise coefficient is calculated using a histogram of signal energy or signal magnitudes as shown in the lower diagram of FIG. 5. The histogram has a plurality of bins on the horizontal axis. Each bin represents a range of signal energies or signal amplitudes found in the sound signal portion. The energy values are depicted on the horizontal axis of the upper diagram. It should be noted that more bins covering smaller ranges may be used for the histogram and that the bins need not cover equal ranges.
As can be observed from the histogram, the majority of the energy values or signal magnitude values is located in the lowest, first bin. A base level BL is selected in the histogram. The base level may e.g. be selected to be the first energy value or magnitude value of the second bin, as shown in FIG. 5, or the last energy value or magnitude value of the second bin. Then the noise coefficient NC is calculated as the ratio of the maximum signal energy or signal magnitude and the base level. A noise coefficient threshold is set and the noise coefficient is compared with the threshold in step S104.
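A minimal sketch of this calculation, with the base level taken as the first value of the second histogram bin as in FIG. 5; the bin count and threshold value are illustrative assumptions:

```python
import numpy as np

def noise_coefficient(portion, bins=10):
    """Ratio of the maximum signal magnitude to a base level derived from a histogram."""
    magnitudes = np.abs(portion)
    counts, edges = np.histogram(magnitudes, bins=bins)
    base_level = edges[1]                 # first value of the second bin
    return float(magnitudes.max() / base_level) if base_level > 0 else 0.0

def passes_noise_filter(portion, threshold=4.0):
    """The portion is a candidate grinding portion only when the coefficient meets the threshold."""
    return noise_coefficient(portion) >= threshold
```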
If the noise coefficient NC does not meet the threshold, further processing of the sound signal portion is skipped (step S106) and the process is restarted for the next sound signal portion as shown by step S108.
If the noise coefficient NC does meet the threshold in step S104, the sound signal portion is identified as a candidate grinding portion CGP. The candidate grinding portion CGP is a sound signal portion that does not only contain noise but may contain candidate grinding events CGE. This is the case for the sound signal in FIG. 5.
FIG. 4B provides an embodiment for a more detailed scheme for performing steps S100 - S108. It is noted that in the present embodiment, if the mobile device 1 contains multiple sound sensors 4, multiple channels provide sound signal portions that are processed individually, i.e. mono.
The candidate grinding portion is fed to an optional Wiener filter in step S110. While the Wiener filter may delay signal processing, it may also improve the accuracy of the teeth grinding detection. In FIG. 5, the grey portions within the sound signal portion, some of which are indicated as 'W', show the result of the application of the Wiener filter.
Processing steps S112 and S114 concern obtaining candidate grinding events CGE.
Step S112 involves assigning a plurality of peaks of the sound signal in the candidate grinding portion CGP to one candidate grinding event on the basis of the distance between peaks. The threshold distance, also referred to as screen length, to distinguish whether or not peaks of the sound signal should be combined to a single candidate grinding event can be determined by a health professional. It should be noted that the distance between peaks may be expressed in any convenient unit, e.g. as a time, as a number of samples, etc.
In one exemplary embodiment, a small percentage (a sensitivity parameter) of the highest samples is taken and collected in an array of peaks, referred to as 'Peaks'. For example, 0.25% of 1323000 samples may be taken to obtain an array Peaks of 33075 of the highest samples. The highest sample of the array Peaks is taken as the centre of the candidate grinding event CGE. If some sample of Peaks is closer than a predetermined screen length of, for example, 8192 samples in any direction, the sample is added to a group 'Group' and the next 8192 samples are verified from the added sample. When there are no new samples from Peaks inside the screen length, the group Group is closed. The associated candidate grinding event CGE has its centre at the highest sample inside Group.
All samples from the array Peaks that are inside Group are marked as analysed.
Then, the next highest unanalysed sample is taken from the array Peaks as the centre of a new candidate grinding event CGE.
The neural network can only process fixed-length candidate grinding events. In order to provide the network with such events, an event is created around the maximum sample in the group. The event size may e.g. be 8192 samples. The left border of the event is taken 4096 samples to the left of the maximum sample and the right border is taken 4096 samples to the right of the maximum sample. This event is fed to the neural network. Should the event be classified as a grinding event, all samples of the group are classified as grinding, even though only a segment of the group was input to the neural network.
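A sketch of the grouping and event-window construction described above, using the example screen length and event size of 8192 samples from the text; the helper names and the sensitivity value are illustrative:

```python
import numpy as np

SCREEN_LENGTH = 8192   # example distance threshold from the description
EVENT_SIZE = 8192      # fixed event length expected by the neural network

def candidate_grinding_events(portion, sensitivity=0.0025):
    """Group the highest samples into candidate grinding events by peak distance."""
    n_peaks = max(1, int(len(portion) * sensitivity))
    peaks = set(np.argsort(np.abs(portion))[-n_peaks:].tolist())   # highest samples
    events = []
    while peaks:
        centre = max(peaks, key=lambda i: abs(portion[i]))   # highest unanalysed sample
        group, frontier = {centre}, [centre]
        while frontier:
            sample = frontier.pop()
            near = [p for p in peaks - group if abs(p - sample) <= SCREEN_LENGTH]
            group.update(near)          # grow the group within the screen length
            frontier.extend(near)
        peaks -= group                  # mark all group samples as analysed
        centre = max(group, key=lambda i: abs(portion[i]))
        left = max(0, centre - EVENT_SIZE // 2)
        events.append((group, portion[left:left + EVENT_SIZE]))   # fixed-length event window
    return events
```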
In FIG. 5, the identified groups from processing step S112 are indicated by the boxes surrounding the identified Wiener filter results W.
In step S114, the detection of a candidate grinding event CGE involves determining whether a signal meets a CGE noise threshold. The detection may use the ratio of the highest energy value or magnitude value of a candidate grinding event CGE to the base level from the histogram and compare this to a CGE noise threshold. If the ratio is below the CGE noise threshold, the CGE is not processed further (i.e. the CGE is considered silent) and the next candidate grinding event CGE is taken, step S116. The fourth candidate grinding event in the upper diagram of FIG. 5 may, for example, be a CGE that does not meet the noise threshold.
FIG. 4C provides an embodiment for a more specific scheme for steps S110-S116.
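A minimal sketch of this silence check, reusing the base level from the portion histogram; the CGE noise threshold value is an assumption:

```python
import numpy as np

def is_silent_event(event, base_level, cge_noise_threshold=3.0):
    """Skip events whose loudest sample is too close to the portion's base (silence) level."""
    ratio = float(np.max(np.abs(event))) / base_level if base_level > 0 else 0.0
    return ratio < cge_noise_threshold
```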
If the CGE ratio is above the CGE noise threshold, the features of the candidate grinding event are determined in order to classify the candidate grinding event CGE as either a grinding event G or a non-grinding event NG. Examples of such features are described above. In step S118, an event feature vector V(CGE) is generated, comprising one or more of the features described above. Said features were determined to contain relevant information for classifying events as grinding or non-grinding. In step S120, the event feature vector is processed in the neural network. The neural network is trained with event feature vectors comprising features of known grinding events and non-grinding events.
As shown for the embodiment of FIG. 4, the neural network provides two outputs for each candidate grinding event CGE: a first number indicating grinding neuron activation GA and a second number indicating non-grinding neuron activation NGA.
The double output of the neural network provides more flexibility in the teeth grinding processing. For example, in one embodiment, a relation of the first number and the second number, e.g. the difference GA-NGA of the first number and the second number, to a certain decision threshold determines whether or not a candidate grinding event is a teeth grinding event, step S122. The decision threshold may e.g. be set to a number in the range of 0.5 to 0.9, e.g. 0.8. If the difference between the two numbers is higher than the decision threshold, the candidate grinding event is determined to be a grinding event G; otherwise the candidate grinding event is determined to be a non-grinding event NG. Once the determination has been made, step S124 determines whether a further candidate grinding event CGE should be analysed in the neural network for the current candidate grinding portion CGP. If this is the case, a further vector V of event features is generated in step S118 and fed to the neural network in step S120. If not, step S108 is invoked for processing the next signal portion.
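A sketch of the decision step S122, assuming the network returns the two non-normalised activations GA and NGA; the 0.8 threshold is the example value from the text:

```python
DECISION_THRESHOLD = 0.8   # example value; the text gives a range of 0.5 to 0.9

def classify_event(grinding_activation, non_grinding_activation,
                   threshold=DECISION_THRESHOLD):
    """Label a candidate grinding event from the two neural network outputs."""
    if grinding_activation - non_grinding_activation > threshold:
        return "G"    # grinding event; the whole group is then labelled as grinding
    return "NG"       # non-grinding event
```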
As mentioned above, if an event is classified as a grinding event G, the complete group is classified as a grinding event.
Finally, in step S126, the number of grinding activations is counted per unit of time, e.g. per hour, to determine whether the person P is a bruxer.
FIG. 4D provides an embodiment for a more specific scheme for steps S118 - S124.
FIG. 6 is a schematic illustration of a sound signal portion of a real sound recording and shows values for the neuron activations to distinguish between grinding events G (the light portions of the signal) and non-grinding events NG (the other events). The dashed boxes indicate groups of events.
Events SIL were not fed to the neural network because these were identified as silent in step S114 of the flow chart of FIG. 4A. The numbers in the boxes above the signal are the first values output from the neural network, i.e. the grinding activation GA; the numbers in the boxes below the signal are the second values output from the neural network, i.e. the non-grinding activation NGA. Both activations are shown if the difference GA-NGA is too low compared to the decision threshold in step S122. In FIG. 6, only one candidate grinding event is classified as a grinding event, viz. where GA=0.97.
FIG. 7 is a schematic block diagram of a general system, such as the mobile device 1.
As shown in FIG. 7, the data processing system 70 may include at least one processor 71 coupled to memory elements 72 through a system bus 73. As such, the data processing system may store program code within memory elements 72. Further, the processor 71 may execute the program code accessed from the memory elements 72 via the system bus 73. In one aspect, the data processing system may be implemented as a computer that is suitable for storing and/or executing program code. It should be appreciated, however, that the data processing system 70 may be implemented in the form of any system including a processor and a memory that is capable of performing the functions described within this specification.
The memory elements 72 may include one or more physical memory devices such as, for example, local memory 74 and one or more bulk storage devices 75. The local memory may refer to random access memory or other non-persistent memory device(s) generally used during actual execution of the program code. A bulk storage device may be implemented as a hard drive or other persistent data storage device. The processing system 70 may also include one or more cache memories (not shown) that provide temporary storage of at least some program code in order to reduce the number of times program code must be retrieved from the bulk storage device 75 during execution.
Input/output (I/O) devices depicted as an input device 76 and an output device 77 optionally can be coupled to the data processing system. Examples of input devices may include, but are not limited to, a keyboard, a pointing device such as a mouse, or the like. Examples of output devices may include, but are not limited to, a monitor or a display, speakers, or the like. Input and/or output devices may be coupled to the data processing system either directly or through intervening I/O controllers.
In an embodiment, the input and the output devices may be implemented as a combined input/output device (illustrated in FIG. 7 with a dashed line surrounding the input device 76 and the output device 77). An example of such a combined device is a touch sensitive display, also sometimes referred to as a "touch screen display" or simply "touch screen". In such an embodiment, input to the device may be provided by a movement of a physical object, such as e.g. a stylus or a finger of a user, on or near the touch screen display. A network adapter 78 may also be coupled to the data processing system to enable it to become coupled to other systems, computer systems, remote network devices, and/or remote storage devices through intervening private or public networks. The network adapter may comprise a data receiver for receiving data that is transmitted by said systems, devices and/or networks to the data processing system 70, and a data transmitter for transmitting data from the data processing system 70 to said systems, devices and/or networks. Modems, cable modems, and Ethernet cards are examples of different types of network adapter that may be used with the data processing system 70.
As pictured in FIG. 7, the memory elements 72 may store an application 79, such as the app 2 for detecting teeth grinding as described above in further detail. In various embodiments, the application 79 may be stored in the local memory 74, the one or more bulk storage devices 75, or apart from the local memory and the bulk storage devices. It should be appreciated that the data processing system 70 may further execute an operating system (not shown in FIG. 7) that can facilitate execution of the application 79. The application 79, being implemented in the form of executable program code, can be executed by the data processing system 70, e.g., by the processor 71. Responsive to executing the application, the data processing system 70 may be configured to perform one or more operations or method steps described herein.
In one aspect of the present invention, the data processing system 70 may represent the mobile device as shown in FIG. 3.
Various embodiments of the invention may be implemented as a program product for use with a computer system, where the program(s) of the program product define functions of the embodiments (including the methods described herein). In one embodiment, the program(s) can be contained on a variety of non-transitory computer-readable storage media, where, as used herein, the expression "non-transitory computer readable storage media" comprises all computer-readable media, with the sole exception being a transitory, propagating signal. In another embodiment, the program(s) can be contained on a variety of transitory computer-readable storage media. Illustrative computer-readable storage media include, but are not limited to:
(i) non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM disks readable by a CD-ROM drive, ROM chips or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., flash memory, floppy disks within a diskette drive or hard-disk drive or any type of solid-state random-access semiconductor memory) on which alterable information is stored. The computer program may be run on the processor 71 described herein.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of embodiments of the present invention has been presented for purposes of illustration, but is not intended to be exhaustive or limited to the implementations in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the present invention. The embodiments were chosen and described in order to best explain the principles and some practical applications of the present invention, and to enable others of ordinary skill in the art to understand the present invention for various embodiments with various modifications as are suited to the particular use contemplated.
It is to be understood that any feature described in relation to any one embodiment may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the embodiments, or any combination of any other of the embodiments. Moreover, the invention is not limited to the embodiments described above, which may be varied within the scope of the accompanying claims.

Claims (15)

1. A computer program product for detecting teeth grinding of a person, the computer program product comprising instructions for a computer, which instructions, when executed by the computer, cause the computer to process a sound signal from the person, the processing comprising:
- noise filtering of respective portions of the sound signal to determine a candidate grinding portion of the sound signal;
- detecting one or more candidate grinding events in the candidate grinding portion;
- generating an event feature vector of features of the candidate grinding event;
- feeding the event feature vector to a neural network trained for teeth grinding detection;
- detecting teeth grinding for the candidate grinding event on the basis of at least one output of the neural network.
2. A computer program product according to claim 1, wherein the noise filtering comprises calculating a noise coefficient for the portion and comparing the noise coefficient with a threshold, the instructions further comprising processing the portion of the sound signal only when the noise coefficient meets the threshold.
3. A computer program product according to claim 1 or 2, wherein the portion of the sound signal has a duration of less than five minutes, preferably less than two minutes, for example one minute or 30 seconds.
4. A computer program product according to one or more of the preceding claims, wherein the detection of one or more candidate grinding events comprises one or more of the following:
- assigning a plurality of peaks of the sound signal in the candidate grinding portion to one candidate grinding event on the basis of the distance between peaks;
- confirming a plurality of peaks of the sound signal as a candidate event only if the candidate event exceeds a set noise threshold.
5. A computer program product according to one or more of the preceding claims, wherein the features of the feature vector comprise a plurality of signal parameters, for example one or more of:
- signal energy
- spectral centroid
- spectral flatness
- zero crossing rate
- average magnitude
- spectral skewness
- spectral kurtosis
- mean of spectral flux
- standard deviation of spectral flux
6. A computer program product according to one or more of the preceding claims, wherein the features of the feature vector comprise one or more Mel frequency cepstrum coefficients (MFCC), and, optionally:
- maximum MFCC
- minimum MFCC
- mean MFCC
- standard deviation MFCC
7. A computer program product according to one or more of the preceding claims, wherein the features of the feature vector comprise custom features, for example a rate feature that describes the relation between a loudest part of the candidate event and the silence level of the candidate grinding portion.
8. A computer program product according to one or more of the preceding claims, wherein the detection of teeth grinding comprises outputting, by the neural network, non-normalized outputs comprising a first number indicating grinding activation and a second number indicating non-grinding activation.
9. A computer program product according to claim 8, wherein the relation of the first number and the second number, for example the difference of the first number and the second number, to a decision threshold determines whether or not a candidate grinding event is a teeth grinding event.
10. A computer program product according to one or more of the preceding claims, wherein the instructions process further inputs for detecting teeth grinding, the further inputs comprising at least one of:
- an input related to other sleep disorder detection algorithms, for example snoring detection; and
- an input related to another signal for detecting teeth grinding.
11. A computer program product according to one or more of the preceding claims, wherein the computer program product is contained in a mobile consumer computer, for example a smart phone, a tablet computer or a laptop computer, comprising at least one sound sensor.
12. A computer program product according to claim 11, wherein the mobile consumer computer comprises a display and the instructions comprise instructions for displaying information relating to the detection of teeth grinding on the display.
13. A computer-implemented method comprising executing the instructions of the computer program product according to one or more of the preceding claims 1 to 12.
14. A mobile consumer computer comprising the computer program product according to one or more of the preceding claims and at least one processing unit configured for executing the instructions stored in the computer program product.
15. A mobile consumer computer according to claim 14, wherein the computer program product comprises the neural network trained for detection of teeth grinding.

Priority Applications (1)

Application Number Priority Date Filing Date Title
NL2016471A NL2016471B1 (en) 2016-03-22 2016-03-22 Teeth grinding detection.

Publications (1)

Publication Number Publication Date
NL2016471B1 true NL2016471B1 (en) 2017-10-05
