EP2715722B1 - Preserving audio data collection privacy in mobile devices - Google Patents
Preserving audio data collection privacy in mobile devices
- Publication number
- EP2715722B1 (application EP12724453A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- audio
- audio data
- subset
- continuous
- stream
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000013480 data collection Methods 0.000 title description 3
- 238000000034 method Methods 0.000 claims description 63
- 238000004458 analytical method Methods 0.000 claims description 23
- 238000012545 processing Methods 0.000 claims description 21
- 230000002123 temporal effect Effects 0.000 claims description 15
- 238000004590 computer program Methods 0.000 claims description 4
- 230000008569 process Effects 0.000 description 22
- 238000010586 diagram Methods 0.000 description 9
- 230000006870 function Effects 0.000 description 6
- 230000000694 effects Effects 0.000 description 4
- 230000007613 environmental effect Effects 0.000 description 4
- 239000000203 mixture Substances 0.000 description 4
- 238000005070 sampling Methods 0.000 description 4
- 238000012800 visualization Methods 0.000 description 4
- 238000005516 engineering process Methods 0.000 description 3
- 238000013459 approach Methods 0.000 description 2
- 230000015556 catabolic process Effects 0.000 description 2
- 238000006731 degradation reaction Methods 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 230000003044 adaptive effect Effects 0.000 description 1
- 230000001413 cellular effect Effects 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 230000007423 decrease Effects 0.000 description 1
- 230000003247 decreasing effect Effects 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000002045 lasting effect Effects 0.000 description 1
- 230000007774 longterm Effects 0.000 description 1
- 238000010295 mobile communication Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000007781 pre-processing Methods 0.000 description 1
- 238000003672 processing method Methods 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
- 238000013179 statistical model Methods 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/78—Detection of presence or absence of voice signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W12/00—Security arrangements; Authentication; Protecting privacy or anonymity
- H04W12/02—Protecting privacy or anonymity, e.g. protecting personally identifiable information [PII]
Definitions
- Mobile devices are incredibly widespread in today's society. For example, people use cellular phones, smart phones, personal digital assistants, laptop computers, pagers, tablet computers, etc. to send and receive data wirelessly from countless locations. Moreover, advancements in wireless communication technology have greatly increased the versatility of today's mobile devices, enabling users to perform a wide range of tasks from a single, portable device that conventionally required either multiple devices or larger, non-portable equipment.
- mobile devices can be configured to determine what environment (e.g., restaurant, car, park, airport, etc.) a mobile device user may be in through a process called context determination.
- Context awareness applications that perform such context determinations seek to determine the environment of a mobile device by utilizing information from the mobile device's sensor inputs, such as GPS, WiFi, and Bluetooth®.
- classifying audio from the mobile device's microphone is highly valuable in making context determinations, but the process of collecting audio that may include speech can raise privacy issues.
- Techniques disclosed herein provide for using the hardware and/or software of a mobile device to obscure speech in the audio data before a context determination is made by a context awareness application using the audio data.
- a subset of a continuous audio stream is captured such that speech (words, phrases and sentences) cannot be reliably reconstructed from the gathered audio.
- the subset is analyzed for audio characteristics, and a determination can be made regarding the ambient environment.
- a method of privacy-sensitive audio analysis may include capturing a subset of audio data contained in a continuous audio stream.
- the continuous audio stream may contain human speech.
- the subset of audio data may obscure content of the human speech.
- the method may include analyzing the subset of audio data for audio characteristics.
- the method may include making a determination of an ambient environment, based, at least in part, on the audio characteristics.
- Embodiments of such a method may include one or more of the following:
- the subset of audio data may comprise a computed function of the continuous audio stream having a lesser number of bits than is needed to reproduce the continuous audio stream with intelligible fidelity.
- the subset of audio data may comprise a plurality of audio data segments, each audio data segment comprising data from a different temporal component of the continuous audio stream.
- the method may include making a determination of an identity of a person based, at least in part, on the audio characteristics.
- each of the plurality of audio data segments may comprise between 30ms and 100ms of recorded audio.
- Each temporal component of the continuous audio stream may be between 250ms and 2s in length.
- the method may include randomly altering an order of the plurality of audio data segments before analyzing the subset of audio data. Randomly altering the order of the plurality of audio data segments may be based, at least in part, on information from one of: a Global Positioning System (GPS) device, signal noise from circuitry within a mobile device, signal noise from a microphone, and signal noise from an antenna.
- a device for obscuring privacy-sensitive audio may include a microphone.
- the device may include a processing unit communicatively coupled to the microphone.
- the processing unit may be configured to capture a subset of audio data contained in a continuous audio stream represented in a signal from the microphone.
- the continuous audio stream may contain human speech.
- the subset of audio data may obscure content of the human speech.
- the processing unit may be configured to analyze the subset of audio data for audio characteristics.
- the processing unit may be configured to make a determination of an ambient environment, based, at least in part, on the audio characteristics.
- Embodiments of such a device may include one or more of the following:
- the subset of audio data may comprise a computed function of the continuous audio stream having a lesser number of bits than is needed to reproduce the continuous audio stream with intelligible fidelity.
- the subset of audio data may comprise a plurality of audio data segments, each audio data segment comprising data from a different temporal component of the continuous audio stream.
- the processing unit may be configured to make a determination of an identity of a person based, at least in part, on the audio characteristics.
- Each of the plurality of audio data segments may comprise between 30ms and 100ms of recorded audio.
- Each temporal component of the continuous audio stream may be between 250ms and 2s in length.
- the processing unit may be further configured to randomly alter an order of the plurality of audio data segments before analyzing the subset of audio data. Randomly altering the order of the plurality of audio data segments may be based, at least in part, on information from one of: a Global Positioning System (GPS) device, signal noise from circuitry within a mobile device, signal noise from the microphone, and signal noise from an antenna.
- a system for determining an environment associated with a mobile device may include an audio sensor configured to receive a continuous audio stream.
- the system may include at least one processing unit coupled to the audio sensor.
- the processing unit may be configured to capture a subset of audio data contained in the continuous audio stream, such that the subset of audio data obscures content of human speech included in the continuous audio stream.
- the processing unit may be configured to analyze the subset of audio data for audio characteristics.
- the processing unit may be configured to make a determination of an ambient environment, based, at least in part, on the audio characteristics.
- Embodiments of such a system may include one or more of the following:
- the system may include a network interface configured to send information representing the subset of audio data via a network to a location remote from the mobile device.
- the at least one processing unit may be configured to make the determination of the ambient environment at the location remote from the mobile device.
- the subset of audio data may comprise a plurality of audio data segments, each audio data segment comprising data from a different temporal component of the continuous audio stream.
- the at least one processing unit may be configured to make a determination of an identity of a person based, at least in part, on the audio characteristics. Each of the plurality of audio data segments may comprise between 30ms and 100ms of recorded audio.
- Each temporal component of the continuous audio stream may be between 250ms and 2s in length.
- the processing unit may be further configured to randomly alter an order of the plurality of audio data segments before analyzing the subset of audio data.
- a computer program product residing on a non-transitory processor-readable medium includes processor-readable instructions configured to cause a processor to capture a subset of audio data contained in a continuous audio stream.
- the continuous audio stream may contain human speech.
- the subset of audio data may obscure content of the human speech.
- the processor-readable instructions may be configured to cause the processor to analyze the subset of audio data for audio characteristics.
- the processor-readable instructions may be configured to cause the processor to make a determination of an ambient environment, based, at least in part, on the audio characteristics.
- Embodiments of such a computer program product may include one or more of the following:
- the subset of audio data may comprise a computed function of the continuous audio stream having a lesser number of bits than is needed to reproduce the continuous audio stream with intelligible fidelity.
- the subset of audio data may comprise a plurality of audio data segments, each audio data segment comprising data from a different temporal component of the continuous audio stream.
- the processor-readable instructions may be configured to cause the processor to make a determination of an identity of a person based, at least in part, on the audio characteristics.
- Each of the plurality of audio data segments may comprise between 30ms and 100ms of recorded audio.
- Each temporal component of the continuous audio stream may be between 250ms and 2s in length.
- the processor-readable instructions may be configured to randomly alter an order of the plurality of audio data segments before analyzing the subset of audio data.
- the processor-readable instructions for randomly altering the order of the plurality of audio data segments may be based, at least in part, on information from one of: a Global Positioning System (GPS) device, signal noise from circuitry within a mobile device, signal noise from a microphone, and signal noise from an antenna.
- a device for obscuring privacy-sensitive audio may include means for capturing a subset of audio data contained in a continuous audio stream represented in a signal from a microphone.
- the continuous audio stream may contain human speech.
- the subset of audio data may obscure content of the human speech.
- the device may include means for analyzing the subset of audio data for audio characteristics.
- the device may include means for determining an ambient environment, based, at least in part, on the audio characteristics.
- Embodiments of such a device may include one or more of the following:
- the means for capturing the subset of audio data may be configured to capture the subset of audio data in accordance with a computed function of the continuous audio stream having a lesser number of bits than is needed to reproduce the continuous audio stream with intelligible fidelity.
- the means for capturing the subset of audio data may be configured to capture the subset of audio data such that the subset of audio data comprises a plurality of audio data segments, each audio data segment comprising data from a different temporal component of the continuous audio stream.
- the means for determining the ambient environment may be configured to make a determination of an identity of a person based, at least in part, on the audio characteristics.
- the means for capturing the subset of audio data may be configured to capture the subset of audio data such that each of the plurality of audio data segments comprises between 30ms and 100ms of recorded audio.
- Items and/or techniques described herein may provide one or more of the following capabilities, as well as other capabilities not mentioned.
- These can include obscuring the content of speech that may be included in an audio stream used for a context determination, while having little or no impact on the accuracy of that determination; a relatively simple method that can be executed in real time using minimal processing resources; and the ability to upload a subset of audio data (having obscured speech) to help improve the accuracy of models used in context determinations. While at least one item/technique-effect pair has been described, a noted effect may be achievable by means other than those noted, and a noted item/technique may not necessarily yield the noted effect.
- Mobile devices such as personal digital assistants (PDAs), mobile phones, tablet computers, and other personal electronics can be enabled with context awareness applications. These context awareness applications can determine, for example, where a user of the mobile device is and what the user might be doing, among other things. Such context determinations can help enable a mobile device to provide additional functionality to a user, such as entering a car mode after determining the user is in a car, or entering a silent mode when determining the user has entered a movie theater.
- a subset of audio data may be captured from a continuous audio stream that may contain speech, whereby the nature of the sampling obscures any speech that might be contained in the continuous audio stream.
- the nature of the sampling also preserves certain audio characteristics of the continuous audio stream such that a context determination-such as a determination regarding a particular ambient environment of a mobile device-suffers little or no reduction in accuracy.
- FIG. 1 is a simplified block diagram illustrating certain components of a mobile device 100 that can provide for context awareness, according to one embodiment.
- This diagram is an example and is not limiting.
- the mobile device 100 may include additional components (e.g., user interface, antennas, display, etc.) omitted from FIG. 1 for simplicity. Additionally, the components shown may be combined, separated, or omitted, depending on the functionality of the mobile device 100.
- the mobile device 100 includes a mobile network interface 120.
- a mobile network interface 120 can include hardware, software, and/or firmware for communicating with a mobile carrier.
- the mobile network interface 120 can utilize High Speed Packet Access (HSPA), Enhanced HSPA (HSPA+), 3GPP Long Term Evolution (LTE), and/or other standards for mobile communication.
- the mobile network interface 120 can also provide certain information, such as location data, that can be useful in context awareness applications.
- the mobile device 100 can include other wireless interface(s) 170.
- Such interfaces can include IEEE 802.11 (WiFi), Bluetooth®, and/or other wireless technologies.
- These wireless interface(s) 170 can provide information to the mobile device 100 that may be used in a context determination.
- the wireless interface(s) 170 can provide information regarding location by determining the approximate location of a wireless network to which one or more of the wireless interface(s) 170 are connected.
- the wireless interface(s) 170 can enable the mobile device 100 to communicate with other devices, such as wireless headsets and/or microphones, which may provide information useful in determining a context of the mobile device 100.
- the mobile device 100 also can include a global positioning system (GPS) unit 160, accelerometer(s) 130, and/or other sensor(s) 150. These additional features can provide information such as location, orientation, movement, temperature, proximity, etc. As with the wireless interface(s) 170, information from these components can help context awareness applications make a context determination regarding the context of the mobile device 100.
- the mobile device 100 additionally can include an analysis/determination module(s) 110.
- the analysis/determination module(s) 110 can receive sensor information from the various components to which it is communicatively coupled.
- the analysis/determination module(s) 110 also can execute software (including context awareness applications) stored on a memory 180, which can be separate from and/or integrated into the analysis/determination module(s) 110.
- the analysis/determination module(s) 110 can comprise one or many processing devices, including a central processing unit (CPU), microprocessor, digital signal processor (DSP), and/or components that, among other things, have the means capable of analyzing audio data and making a determination based on the analysis.
- wireless interfaces 170 can greatly assist in determining location when the user is outdoors, near identifiable WiFi or Bluetooth® access points, walking, etc.
- these components have their limitations. In many scenarios they are less useful for determining environment and situation. For example, information from these components is less useful in distinguishing whether a user is in a meeting or in their office, or whether a user is in a grocery store or the gymnasium immediately next to it.
- information from the audio capturing module 140 can provide highly valuable audio data that can be used to help classify the environment, as well as determine whether there is speech present, whether there are multiple speakers present, the identity of a speaker, etc.
- the process of capturing audio data by a mobile device 100 for a context determination can include temporarily and/or permanently storing audio data to the phone's memory 180.
- the capture of audio data that includes intelligible speech can raise privacy issues. In fact, federal, state, and/or local laws may be implicated if the mobile device 100 captures speech from a user of the mobile device 100, or another person, without consent. These issues can be mitigated by using the hardware and/or software of the mobile device 100 to pre-process the audio data before it is captured such that speech (words, phrases and sentences) cannot be reliably reconstructed from the captured audio data. Moreover, the pre-processing can still allow determination of an ambient environment (e.g., from background noise) and/or other audio characteristics of the audio data, such as the presence of speech, music, typing sounds, etc.
- FIG. 2a is a visualization of a process for capturing sufficient audio information to classify a mobile device and/or user's situation/environment without performance degradation. Additionally, the process can help ensure that speech (words, phrases, and sentences) cannot be reliably reconstructed from the captured information.
- This process involves reducing the dimensionality of an input audio stream. In other words, the bits (i.e., digital data) of an input stream of continuous audio are reduced such that the resultant audio stream has a lesser number of bits than is needed to reproduce the continuous audio stream with intelligible fidelity. The dimensionality reduction can therefore be viewed as a computed function designed to ensure that speech is irreproducible.
- a continuous audio stream can comprise a window 210 of audio data lasting T_window seconds.
- the window 210 can be viewed as having a plurality of audio data segments. More specifically, the window 210 can comprise N temporal components, or blocks 220, where each block 220 lasts T_block seconds and comprises a plurality of frames 230 of T_frame seconds each.
- a microphone signal can be sampled such that only one frame 230 (with T_frame seconds of data) is collected in every block of T_block seconds.
- T_frame can range from less than 30ms to 100ms or more.
- T_block can range from less than 250ms up to 2000ms (2s) or more.
- T_window can be as short as a single block (e.g., one block per window), up to one minute or more.
- Different frame, block, and window lengths can impact the number of frames 230 per block 220 and the number of blocks 220 per window 210.
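- As a concrete illustration (an editorial sketch, not language from the patent), the frame/block/window arithmetic can be expressed in a few lines of Python. The parameter values below (T_frame = 50ms, T_block = 500ms, T_window = 10s) and the 16-bit, 16kHz sample format are assumed, illustrative choices within the ranges above.

```python
# Illustrative frame/block/window geometry. All values are assumptions
# drawn from the ranges above, not requirements of the described technique.
T_FRAME = 0.050    # seconds of audio kept per block
T_BLOCK = 0.500    # seconds per temporal component (block 220)
T_WINDOW = 10.0    # seconds per window 210 of continuous audio

frames_per_block = round(T_BLOCK / T_FRAME)    # 10 candidate frames per block
blocks_per_window = round(T_WINDOW / T_BLOCK)  # 20 blocks per window
duty_cycle = T_FRAME / T_BLOCK                 # 0.10 -> 10% of the audio kept

# Dimensionality reduction: bits retained versus bits in the full stream,
# assuming (illustratively) 16-bit mono samples at 16kHz.
SAMPLE_RATE, BITS_PER_SAMPLE = 16000, 16
full_bits = round(T_WINDOW * SAMPLE_RATE) * BITS_PER_SAMPLE
kept_bits = blocks_per_window * round(T_FRAME * SAMPLE_RATE) * BITS_PER_SAMPLE
print(frames_per_block, blocks_per_window, duty_cycle, kept_bits / full_bits)
```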
- the capturing of frames 230 can be achieved in different ways.
- the analysis/determination module(s) 110 can continuously sample the microphone signal during a window 210 of continuous audio, discarding (i.e., not storing) the unwanted frames 230.
- for example, with T_frame = 50ms and T_block = 500ms, the processing unit can simply discard 450ms out of every 500ms sampled.
- the analysis/determination module(s) 110 can turn the audio capturing module 140 off during the unwanted frames 230 (e.g., turning the audio capturing module 140 off for 450ms out of every 500ms), thereby collecting only the frames 230 that will be inserted into the resulting audio information 240-a used in a context determination.
- the resulting audio information 240-a is a collection of frames 230 that comprises only a subset of the continuous audio stream in the window 210. Even so, this resulting audio information 240-a can include audio characteristics that can help enable a context determination, such as determining an ambient environment, with no significant impact on the accuracy of the determination. Accordingly, the resulting audio information 240-a can be provided in real time to an application for context classification, and/or stored as one or more waveform(s) in memory 180 for later analysis and/or uploading to a server communicatively coupled to the mobile device 100.
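- A minimal sketch of the FIG. 2a capture scheme follows, assuming the window of continuous audio is available as a NumPy array of samples; the function name, array-based interface, and default parameter values are editorial assumptions.

```python
import numpy as np

def capture_subset(window: np.ndarray, sample_rate: int,
                   t_frame: float = 0.050, t_block: float = 0.500) -> np.ndarray:
    """Keep only the first t_frame seconds of every t_block seconds and
    discard the rest, yielding the resulting audio information (240-a)."""
    frame_len = round(t_frame * sample_rate)
    block_len = round(t_block * sample_rate)
    frames = [window[start:start + frame_len]
              for start in range(0, len(window) - block_len + 1, block_len)]
    return np.concatenate(frames)

# Example: 10s of placeholder audio at 16kHz -> 1s of retained audio.
window = np.random.default_rng(0).standard_normal(10 * 16000).astype(np.float32)
subset = capture_subset(window, 16000)
print(len(subset) / 16000)  # 1.0
```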
- FIGS. 2b and 2c are visualizations of processes for capturing audio information, similar to the process shown in FIG. 2a. In FIGS. 2b and 2c, however, additional steps are taken to help ensure further privacy of any speech that may be captured.
- In FIG. 2b, a visualization is provided illustrating how, for every window 210 of T_window seconds, the first frame 230 of each block 220 can be captured.
- the resultant audio information 240-b is similar to the resulting audio information 240-a of FIG. 2a, with the additional feature that the order of the frames comprising the resultant audio information 240-b is randomized, thereby further decreasing the likelihood that any speech included in the resultant audio information 240-b could be reproduced with intelligible fidelity.
- FIG. 2c illustrates a process similar to the one shown in FIG. 2b, but with further randomization of the frame 230 captured for each block 220. More specifically, rather than capturing the first frame 230 of each block 220 of a window 210 as shown in FIGS. 2a and 2b, the process shown in FIG. 2c selects a random frame 230 from each block 220 instead.
- the randomization of both the capturing of frames 230 of a window 210 and the ordering of frames 230 in the resultant audio information 240-c helps further ensure that any speech contained in a continuous audio stream within a window 210 is obscured and irreproducible.
- the randomization used in processes shown in FIGS. 2b and 2c can be computed using a seed that is generated in numerous ways.
- the seed may be based on GPS time provided by the GPS unit 160, noise from circuitry within the mobile device 100, noise (or other signal) from the audio capturing module 140, noise from an antenna, etc.
- the permutation can be discarded (e.g., not stored) to help ensure that the shuffling effect cannot be reversed.
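- One way the FIG. 2c process might be realized is sketched below: a randomly positioned frame 230 is taken from each block 220, and the captured frames are then shuffled. The seed_noise argument stands in for any of the entropy sources noted above (GPS time, circuit noise, microphone noise, antenna noise); the seed derivation is an editorial assumption, and the permutation is used once and never stored.

```python
import numpy as np

def capture_randomized(window: np.ndarray, sample_rate: int, seed_noise: bytes,
                       t_frame: float = 0.050, t_block: float = 0.500) -> np.ndarray:
    """FIG. 2c sketch: one randomly chosen frame per block, followed by a
    random permutation of the captured frames (permutation not retained)."""
    frame_len = round(t_frame * sample_rate)
    block_len = round(t_block * sample_rate)
    # Derive a seed from raw noise bytes supplied by any of the noted sources.
    rng = np.random.default_rng(int.from_bytes(seed_noise[:8], "little"))
    frames = []
    for start in range(0, len(window) - block_len + 1, block_len):
        offset = int(rng.integers(0, block_len - frame_len + 1))
        frames.append(window[start + offset:start + offset + frame_len])
    order = rng.permutation(len(frames))  # computed, used, then discarded
    return np.concatenate([frames[i] for i in order])
```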
- FIGS. 2a, 2b, and 2c are provided as examples and are not limiting. Other embodiments are contemplated.
- the blocks 220 may be randomly permutated before frames 230 are captured.
- frames 230 can be captured randomly throughout the entire window 210, rather than capturing one frame 230 per block 220.
- FIG. 3a is a flow diagram illustrating an embodiment of a method 300-1 for providing the functionality shown in FIGS. 2b and 2c.
- the method 300-1 can begin at stage 310, where a block 220 of audio data from a continuous audio stream is received.
- the continuous audio stream can be, for example, audio within a window 210 of time to which the audio capturing module 140 of a mobile device 100 is exposed.
- a frame 230 of the block 220 of audio data is captured.
- the frame 230 can be a predetermined frame (e.g., the first frame) of each block 220 of audio data, or it can be randomly selected.
- the frame 230 is captured, for example, by being stored (either temporarily or permanently) in the memory 180 of a mobile device 100.
- the capturing of a frame 230 can include turning an audio capturing module 140 on and off and/or sampling certain portions of a signal from an audio capturing module 140 representing a continuous audio stream.
- the process then moves to stage 340, where the order of the captured frames is randomized.
- These randomized frames can be stored, for example, in an audio file used for analysis by a context awareness application.
- at stage 350, a determination of the ambient environment (or other context determination) is made, based, at least in part, on audio characteristics of the randomized frames.
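- Stage 350 is left abstract in the description. As a hedged illustration only, simple audio characteristics could be computed over the randomized frames and handed to a pretrained classifier; the particular features below (energy, zero-crossing rate, spectral centroid) are editorial choices, not features mandated by the patent.

```python
import numpy as np

def audio_characteristics(frames: np.ndarray) -> np.ndarray:
    """Toy feature vector over the privacy-preserving subset. Any
    characteristics that survive frame shuffling (e.g., short-term
    spectral statistics) could serve instead."""
    energy = float(np.mean(frames ** 2))
    zero_crossing_rate = float(np.mean(np.diff(np.sign(frames)) != 0))
    spectrum = np.abs(np.fft.rfft(frames))
    centroid = float(np.sum(np.arange(spectrum.size) * spectrum)
                     / (np.sum(spectrum) + 1e-12))
    return np.array([energy, zero_crossing_rate, centroid])

# A trained model (e.g., a per-environment statistical model) would map this
# vector to a label such as "car", "office", or "street"; the classifier
# itself is outside the scope of this sketch.
```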
- stages of the method 300-1 may be performed by one or more different components of the mobile device 100 and/or other systems communicatively coupled with the mobile device 100.
- stages can be performed by any combination of hardware, software, and/or firmware.
- stages can be performed by hardware (such as the analysis/determination module(s) 110), which may, for instance, randomize captured frames in a buffer before storing them in the memory 180 and/or providing them to a software application.
- some embodiments may enable certain parameters (e.g., T_window, T_block, and/or T_frame) to be at least partially configurable by software.
- a mobile device 100 may upload the resultant audio information 240 including the captured frames to a remote server.
- the remote server can make the determination of the ambient environment of stage 350.
- the mobile device 100 can upload the resultant audio information 240 along with a determination of the ambient environment made by the mobile device 100.
- the remote server can use the determination and the resultant audio information 240 to modify existing models used to make ambient environment determinations. This enables the server to maintain models that are able to "learn" from input received by mobile devices 100. Modified and/or updated models then can be downloaded to mobile devices 100 to help improve the accuracy of ambient environment determinations made by the mobile devices 100. Thus, ambient environment determinations (or other contextual determinations) can be continually improved.
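- The upload path could plausibly look like the following sketch; the endpoint URL and JSON field names are entirely hypothetical placeholders, as the patent does not define a wire format.

```python
import json
from urllib import request

def upload_subset(subset_bytes: bytes, device_determination: str) -> None:
    """Send the obscured audio subset, along with the device's own ambient
    environment determination, to a server that refines its models. The URL
    and payload shape are placeholders, not part of the patent."""
    payload = json.dumps({
        "audio_subset_hex": subset_bytes.hex(),
        "device_determination": device_determination,  # e.g., "office"
    }).encode("utf-8")
    req = request.Request("https://example.com/context/upload", data=payload,
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)  # response/error handling omitted for brevity
```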
- the techniques described herein can allow not only a determination of an ambient environment and/or other contextual determinations, but also determination of other audio characteristics of the audio data. These audio characteristics can include the presence of speech, music, typing sounds, and more. Depending on the audio characteristics included, different determinations may be made.
- FIG. 3b is a flow diagram illustrating an example of a method 300-2, which includes stages similar to those of the method 300-1 of FIG. 3a.
- the method 300-2 of FIG. 3b includes an additional stage 360, where a determination is made regarding the identity of speaker(s) whose speech is included in the captured frames used to make a determination of an ambient environment.
- the determination of stage 360 can be made by the mobile device 100 and/or a remote server to which the captured frames are uploaded.
- the determination regarding identity can include the use of other information and/or models, such as models to help determine the age, gender, etc. of the speaker, stored information regarding audio characteristics of a particular person's speech, and other data.
- Tests were conducted to gauge the impact of this sampling scheme on the accuracy of classifiers (e.g., probabilistic classifiers used in context awareness applications).
- the data used was a commercially acquired audio data set of environmental sounds from a set of environments (e.g., in a park, on a street, in a market, in a car, in an airport, etc.) common among context awareness applications.
- for these tests, T_frame was fixed at 50ms.
- Table 1 indicates how reducing the dimensionality of the audio data by sampling only subsets of a continuous audio stream can have little impact on the accuracy of the classifier's determination of an ambient environment until T_block approaches 2 seconds (i.e., the microphone is on for only 50ms out of every 2 seconds, or 2.5% of the time). Results may differ for different classifiers.
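- The 2.5% figure follows directly from the frame and block lengths used in the tests:

```python
t_frame, t_block = 0.050, 2.0   # seconds: the operating point noted above
duty_cycle = t_frame / t_block  # 0.025 -> microphone on 2.5% of the time
```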
- configurations may be described as a process which is depicted as a flow diagram or block diagram. Although each may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process may have additional steps not included in the figure.
- Computer programs incorporating various features of the present invention may be encoded on various non-transitory computer-readable and/or non-transitory processor-readable storage media; suitable media include magnetic media, optical media, flash memory, and other non-transitory media.
- Non-transitory processor-readable storage media encoded with the program code may be packaged with a compatible device or provided separately from other devices.
- program code may be encoded and transmitted via wired, optical, and/or wireless networks conforming to a variety of protocols, including the Internet, thereby allowing distribution, e.g., via Internet download.
Landscapes
- Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Quality & Reliability (AREA)
- Telephone Function (AREA)
- Soundproofing, Sound Blocking, And Sound Damping (AREA)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201161488927P | 2011-05-23 | 2011-05-23 | |
US13/213,294 US8700406B2 (en) | 2011-05-23 | 2011-08-19 | Preserving audio data collection privacy in mobile devices |
PCT/US2012/037783 WO2012162009A1 (en) | 2011-05-23 | 2012-05-14 | Preserving audio data collection privacy in mobile devices |
Publications (2)
Publication Number | Publication Date |
---|---|
EP2715722A1 (en) | 2014-04-09 |
EP2715722B1 (en) | 2018-06-13 |
Family
ID=46178795
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP12724453.1A Active EP2715722B1 (en) | 2011-05-23 | 2012-05-14 | Preserving audio data collection privacy in mobile devices |
Country Status (6)
Country | Link |
---|---|
US (2) | US8700406B2 (ko) |
EP (1) | EP2715722B1 (en) |
JP (1) | JP5937202B2 (ja) |
KR (1) | KR101580510B1 (ko) |
CN (1) | CN103620680B (zh) |
WO (1) | WO2012162009A1 (en) |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130090926A1 (en) * | 2011-09-16 | 2013-04-11 | Qualcomm Incorporated | Mobile device context information using speech detection |
ES2767097T3 (es) * | 2011-09-30 | 2020-06-16 | Orange | Method, apparatuses and applications for contextually obscuring attributes of a user profile |
US8925037B2 (en) * | 2013-01-02 | 2014-12-30 | Symantec Corporation | Systems and methods for enforcing data-loss-prevention policies using mobile sensors |
US9300266B2 (en) | 2013-02-12 | 2016-03-29 | Qualcomm Incorporated | Speaker equalization for mobile devices |
US9076459B2 (en) * | 2013-03-12 | 2015-07-07 | Intermec Ip, Corp. | Apparatus and method to classify sound to detect speech |
KR102149266B1 (ko) * | 2013-05-21 | 2020-08-28 | 삼성전자 주식회사 | Method and apparatus for managing audio data of an electronic device |
US9305317B2 (en) | 2013-10-24 | 2016-04-05 | Tourmaline Labs, Inc. | Systems and methods for collecting and transmitting telematics data from a mobile device |
US10057764B2 (en) * | 2014-01-18 | 2018-08-21 | Microsoft Technology Licensing, Llc | Privacy preserving sensor apparatus |
JP6215129B2 (ja) * | 2014-04-25 | 2017-10-18 | 京セラ株式会社 | Portable electronic device, control method, and control program |
US10404697B1 (en) | 2015-12-28 | 2019-09-03 | Symantec Corporation | Systems and methods for using vehicles as information sources for knowledge-based authentication |
US10326733B2 (en) | 2015-12-30 | 2019-06-18 | Symantec Corporation | Systems and methods for facilitating single sign-on for multiple devices |
US10116513B1 (en) | 2016-02-10 | 2018-10-30 | Symantec Corporation | Systems and methods for managing smart building systems |
US10375114B1 (en) | 2016-06-27 | 2019-08-06 | Symantec Corporation | Systems and methods for enforcing access-control policies |
US10462184B1 (en) | 2016-06-28 | 2019-10-29 | Symantec Corporation | Systems and methods for enforcing access-control policies in an arbitrary physical space |
US10469457B1 (en) | 2016-09-26 | 2019-11-05 | Symantec Corporation | Systems and methods for securely sharing cloud-service credentials within a network of computing devices |
US10812981B1 (en) | 2017-03-22 | 2020-10-20 | NortonLifeLock, Inc. | Systems and methods for certifying geolocation coordinates of computing devices |
US10540521B2 (en) | 2017-08-24 | 2020-01-21 | International Business Machines Corporation | Selective enforcement of privacy and confidentiality for optimization of voice applications |
GB2567703B (en) * | 2017-10-20 | 2022-07-13 | Cirrus Logic Int Semiconductor Ltd | Secure voice biometric authentication |
DE102019108178B3 (de) * | 2019-03-29 | 2020-06-18 | Tribe Technologies Gmbh | Method and device for automatic monitoring of telephone calls |
US11354085B2 (en) | 2019-07-03 | 2022-06-07 | Qualcomm Incorporated | Privacy zoning and authorization for audio rendering |
US11580213B2 (en) * | 2019-07-03 | 2023-02-14 | Qualcomm Incorporated | Password-based authorization for audio rendering |
WO2021107218A1 (ko) * | 2019-11-29 | 2021-06-03 | 주식회사 공훈 | Method and device for protecting the privacy of voice data |
WO2021157862A1 (en) * | 2020-02-06 | 2021-08-12 | Samsung Electronics Co., Ltd. | Electronic device and control method thereof |
Family Cites Families (65)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4221931A (en) * | 1977-10-17 | 1980-09-09 | Harris Corporation | Time division multiplied speech scrambler |
JPS59111441A (ja) * | 1982-12-17 | 1984-06-27 | Sony Corp | Privacy (scrambling) system for voice signals |
US5267312A (en) * | 1990-08-06 | 1993-11-30 | Nec Home Electronics, Ltd. | Audio signal cryptographic system |
JP2655046B2 (ja) * | 1993-09-13 | 1997-09-17 | 日本電気株式会社 | Vector quantization device |
WO1997027578A1 (en) * | 1996-01-26 | 1997-07-31 | Motorola Inc. | Very low bit rate time domain speech analyzer for voice messaging |
US7930546B2 (en) * | 1996-05-16 | 2011-04-19 | Digimarc Corporation | Methods, systems, and sub-combinations useful in media identification |
US6078666A (en) * | 1996-10-25 | 2000-06-20 | Matsushita Electric Industrial Co., Ltd. | Audio signal processing method and related device with block order switching |
US7809138B2 (en) * | 1999-03-16 | 2010-10-05 | Intertrust Technologies Corporation | Methods and apparatus for persistent control and protection of content |
US6119086A (en) * | 1998-04-28 | 2000-09-12 | International Business Machines Corporation | Speech coding via speech recognition and synthesis based on pre-enrolled phonetic tokens |
JP3180762B2 (ja) * | 1998-05-11 | 2001-06-25 | 日本電気株式会社 | Speech encoding device and speech decoding device |
US7457415B2 (en) * | 1998-08-20 | 2008-11-25 | Akikaze Technologies, Llc | Secure information distribution system utilizing information segment scrambling |
US7263489B2 (en) * | 1998-12-01 | 2007-08-28 | Nuance Communications, Inc. | Detection of characteristics of human-machine interactions for dialog customization and analysis |
US6937730B1 (en) * | 2000-02-16 | 2005-08-30 | Intel Corporation | Method and system for providing content-specific conditional access to digital content |
US8677505B2 (en) * | 2000-11-13 | 2014-03-18 | Digital Doors, Inc. | Security system with extraction, reconstruction and secure recovery and storage of data |
US7177808B2 (en) * | 2000-11-29 | 2007-02-13 | The United States Of America As Represented By The Secretary Of The Air Force | Method for improving speaker identification by determining usable speech |
WO2002049363A1 (en) * | 2000-12-15 | 2002-06-20 | Agency For Science, Technology And Research | Method and system of digital watermarking for compressed audio |
US7350228B2 (en) * | 2001-01-23 | 2008-03-25 | Portauthority Technologies Inc. | Method for securing digital content |
JP3946965B2 (ja) * | 2001-04-09 | 2007-07-18 | ソニー株式会社 | Recording device, recording method, recording medium, and program for recording information protecting intangible property rights |
DE10138650A1 (de) * | 2001-08-07 | 2003-02-27 | Fraunhofer Ges Forschung | Method and device for encrypting a discrete signal, and method and device for decryption |
US7143028B2 (en) * | 2002-07-24 | 2006-11-28 | Applied Minds, Inc. | Method and system for masking speech |
GB2392807A (en) * | 2002-09-06 | 2004-03-10 | Sony Uk Ltd | Processing digital data |
FR2846178B1 (fr) * | 2002-10-21 | 2005-03-11 | Medialive | Adaptive and progressive descrambling of audio streams |
FR2846179B1 (fr) * | 2002-10-21 | 2005-02-04 | Medialive | Adaptive and progressive scrambling of audio streams |
JP4206876B2 (ja) * | 2003-09-10 | 2009-01-14 | ヤマハ株式会社 | Communication device and program for conveying conditions at a remote location |
US7564906B2 (en) * | 2004-02-17 | 2009-07-21 | Nokia Siemens Networks Oy | OFDM transceiver structure with time-domain scrambling |
US7720012B1 (en) * | 2004-07-09 | 2010-05-18 | Arrowhead Center, Inc. | Speaker identification in the presence of packet losses |
JP2006238110A (ja) * | 2005-02-25 | 2006-09-07 | Matsushita Electric Ind Co Ltd | Monitoring system |
EP1725056B1 (en) * | 2005-05-16 | 2013-01-09 | Sony Ericsson Mobile Communications AB | Method for disabling a mobile device |
US8781967B2 (en) * | 2005-07-07 | 2014-07-15 | Verance Corporation | Watermarking in an encrypted domain |
US8700791B2 (en) * | 2005-10-19 | 2014-04-15 | Immersion Corporation | Synchronization of haptic effect data in a media transport stream |
US8214516B2 (en) * | 2006-01-06 | 2012-07-03 | Google Inc. | Dynamic media serving infrastructure |
CN101467203A (zh) * | 2006-04-24 | 2009-06-24 | 尼禄股份公司 | Advanced audio coding apparatus |
US8433915B2 (en) * | 2006-06-28 | 2013-04-30 | Intellisist, Inc. | Selective security masking within recorded speech |
US20080243492A1 (en) * | 2006-09-07 | 2008-10-02 | Yamaha Corporation | Voice-scrambling-signal creation method and apparatus, and computer-readable storage medium therefor |
CA2678942C (en) * | 2007-02-20 | 2018-03-06 | Nielsen Media Research, Inc. | Methods and apparatus for characterizing media |
JP4245060B2 (ja) * | 2007-03-22 | 2009-03-25 | ヤマハ株式会社 | Sound masking system, masking sound generation method, and program |
US8243924B2 (en) * | 2007-06-29 | 2012-08-14 | Google Inc. | Progressive download or streaming of digital media securely through a localized container and communication protocol proxy |
JP4914319B2 (ja) * | 2007-09-18 | 2012-04-11 | 日本電信電話株式会社 | Communication speech processing method, apparatus therefor, and program therefor |
US8379854B2 (en) * | 2007-10-09 | 2013-02-19 | Alcatel Lucent | Secure wireless communication |
KR101444099B1 (ko) * | 2007-11-13 | 2014-09-26 | 삼성전자주식회사 | Method and apparatus for detecting a speech section |
US8140326B2 (en) * | 2008-06-06 | 2012-03-20 | Fuji Xerox Co., Ltd. | Systems and methods for reducing speech intelligibility while preserving environmental sounds |
CA2731732A1 (en) * | 2008-07-21 | 2010-01-28 | Auraya Pty Ltd | Voice authentication system and methods |
WO2010028301A1 (en) * | 2008-09-06 | 2010-03-11 | GH Innovation, Inc. | Spectrum harmonic/noise sharpness control |
JP5222680B2 (ja) * | 2008-09-26 | 2013-06-26 | セコム株式会社 | Terminal user monitoring device and system |
US8244531B2 (en) * | 2008-09-28 | 2012-08-14 | Avaya Inc. | Method of retaining a media stream without its private audio content |
WO2010047566A2 (en) * | 2008-10-24 | 2010-04-29 | Lg Electronics Inc. | An apparatus for processing an audio signal and method thereof |
EP2605485B1 (en) * | 2008-10-31 | 2017-05-03 | Orange | Communication system incorporating ambient sound pattern detection and method of operation thereof |
KR101829865B1 (ko) * | 2008-11-10 | 2018-02-20 | 구글 엘엘씨 | Multisensory speech detection |
JP5691191B2 (ja) * | 2009-02-19 | 2015-04-01 | ヤマハ株式会社 | Masking sound generation device, masking system, masking sound generation method, and program |
KR101581883B1 (ko) * | 2009-04-30 | 2016-01-11 | 삼성전자주식회사 | Apparatus and method for speech detection using motion information |
US8200480B2 (en) * | 2009-09-30 | 2012-06-12 | International Business Machines Corporation | Deriving geographic distribution of physiological or psychological conditions of human speakers while preserving personal privacy |
EP2367169A3 (en) * | 2010-01-26 | 2014-11-26 | Yamaha Corporation | Masker sound generation apparatus and program |
US20110184740A1 (en) * | 2010-01-26 | 2011-07-28 | Google Inc. | Integration of Embedded and Network Speech Recognizers |
US8423351B2 (en) * | 2010-02-19 | 2013-04-16 | Google Inc. | Speech correction for typed input |
US20110216905A1 (en) * | 2010-03-05 | 2011-09-08 | Nexidia Inc. | Channel compression |
US20110218798A1 (en) * | 2010-03-05 | 2011-09-08 | Nexdia Inc. | Obfuscating sensitive content in audio sources |
US8965545B2 (en) * | 2010-09-30 | 2015-02-24 | Google Inc. | Progressive encoding of audio |
US20120136658A1 (en) * | 2010-11-30 | 2012-05-31 | Cox Communications, Inc. | Systems and methods for customizing broadband content based upon passive presence detection of users |
US8938619B2 (en) * | 2010-12-29 | 2015-01-20 | Adobe Systems Incorporated | System and method for decrypting content samples including distinct encryption chains |
US20120203491A1 (en) * | 2011-02-03 | 2012-08-09 | Nokia Corporation | Method and apparatus for providing context-aware control of sensors and sensor data |
US9262612B2 (en) * | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US9407706B2 (en) * | 2011-03-31 | 2016-08-02 | Qualcomm Incorporated | Methods, devices, and apparatuses for activity classification using temporal scaling of time-referenced features |
US20130006633A1 (en) * | 2011-07-01 | 2013-01-03 | Qualcomm Incorporated | Learning speech models for mobile device users |
US9159324B2 (en) * | 2011-07-01 | 2015-10-13 | Qualcomm Incorporated | Identifying people that are proximate to a mobile device user via social graphs, speech models, and user context |
US20130090926A1 (en) * | 2011-09-16 | 2013-04-11 | Qualcomm Incorporated | Mobile device context information using speech detection |
- 2011-08-19: US application 13/213,294 filed; granted as US8700406B2 (Active)
- 2012-05-14: EP application 12724453.1A filed; granted as EP2715722B1 (Active)
- 2012-05-14: KR application 1020137034145 filed; granted as KR101580510B1 (IP Right Grant)
- 2012-05-14: JP application 2014-512870 filed; granted as JP5937202B2 (Active)
- 2012-05-14: CN application 201280030290.3 filed; granted as CN103620680B (Active)
- 2012-05-14: PCT application PCT/US2012/037783 filed; published as WO2012162009A1 (status unknown)
- 2014-02-21: US application 14/186,730 filed; published as US20140172424A1 (Abandoned)
Non-Patent Citations (1)
Title |
---|
None * |
Also Published As
Publication number | Publication date |
---|---|
US20140172424A1 (en) | 2014-06-19 |
KR101580510B1 (ko) | 2015-12-28 |
JP5937202B2 (ja) | 2016-06-22 |
US20120303360A1 (en) | 2012-11-29 |
CN103620680B (zh) | 2015-12-23 |
WO2012162009A1 (en) | 2012-11-29 |
JP2014517939A (ja) | 2014-07-24 |
EP2715722A1 (en) | 2014-04-09 |
KR20140021681A (ko) | 2014-02-20 |
CN103620680A (zh) | 2014-03-05 |
US8700406B2 (en) | 2014-04-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2715722B1 (en) | Preserving audio data collection privacy in mobile devices | |
WO2020029906A1 (zh) | Method and apparatus for separating multi-person speech | |
EP2994911B1 (en) | Adaptive audio frame processing for keyword detection | |
CN107172256B (zh) | Earphone call adaptive adjustment method and apparatus, mobile terminal, and storage medium | |
KR102469262B1 (ko) | Key phrase detection using audio watermarking | |
US20130006633A1 (en) | Learning speech models for mobile device users | |
CN110298212B (zh) | Model training method, emotion recognition method, expression display method, and related devices | |
US10433256B2 (en) | Application control method and application control device | |
CN107430870A (zh) | Low-power voice command detector | |
WO2013040414A1 (en) | Mobile device context information using speech detection | |
CN104834847A (zh) | Identity verification method and apparatus | |
US11218666B1 (en) | Enhanced audio and video capture and presentation | |
US11626104B2 (en) | User speech profile management | |
CN110875036A (zh) | Speech classification method, apparatus, device, and computer-readable storage medium | |
CN108073572A (zh) | Information processing method and apparatus, and simultaneous interpretation system | |
US9818427B2 (en) | Automatic self-utterance removal from multimedia files | |
CN106485246B (zh) | Character recognition method and apparatus | |
KR20240100384A (ko) | Signal encoding/decoding method, apparatus, user equipment, network-side device, and storage medium | |
CN117711420B (zh) | Target voice extraction method, electronic device, and storage medium | |
KR101595090B1 (ko) | Information retrieval method and apparatus using speech recognition | |
CN111787149A (zh) | Noise reduction processing method, system, and computer storage medium | |
CN108073566A (zh) | Word segmentation method and apparatus, and apparatus for word segmentation | |
WO2023160515A1 (zh) | Video processing method, apparatus, device, and medium | |
CN116597828B (zh) | Model determination method, model application method, and related apparatus | |
CN106776659B (zh) | Search result ranking method, apparatus, and user terminal based on scenic-spot component recognition | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20131223 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17Q | First examination report despatched |
Effective date: 20170803 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R079 Ref document number: 602012047410 Country of ref document: DE Free format text: PREVIOUS MAIN CLASS: G10L0011020000 Ipc: G10L0025780000 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: G10L 21/0208 20130101ALI20171117BHEP Ipc: G10L 25/78 20130101AFI20171117BHEP Ipc: H04W 12/02 20090101ALI20171117BHEP |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
INTG | Intention to grant announced |
Effective date: 20180102 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP Ref country code: AT Ref legal event code: REF Ref document number: 1009285 Country of ref document: AT Kind code of ref document: T Effective date: 20180615 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602012047410 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: MP Effective date: 20180613 |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG4D |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180913 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180613 Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180613 Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180613 Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180613 Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180613 Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180913 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180613 Ref country code: RS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180613 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180914 Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180613 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 1009285 Country of ref document: AT Kind code of ref document: T Effective date: 20180613 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180613 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180613 Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180613 Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180613 Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180613 Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180613 Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180613 Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181013 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180613 Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180613 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602012047410 Country of ref document: DE |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
26N | No opposition filed |
Effective date: 20190314 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180613 Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180613 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180613 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180613 Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20190531 Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20190531 |
|
REG | Reference to a national code |
Ref country code: BE Ref legal event code: MM Effective date: 20190531 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20190514 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: TR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180613 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20190514 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20190531 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20181015 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: HU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO Effective date: 20120514 Ref country code: MT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180613 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR Payment date: 20210420 Year of fee payment: 10 Ref country code: DE Payment date: 20210413 Year of fee payment: 10 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20210428 Year of fee payment: 10 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20180613 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R119 Ref document number: 602012047410 Country of ref document: DE |
|
GBPC | Gb: european patent ceased through non-payment of renewal fee |
Effective date: 20220514 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: FR Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20220531 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GB Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20220514 Ref country code: DE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20221201 |