WO2017045696A1 - Systems, methods, and devices for generating notification sounds - Google Patents

Systems, methods, and devices for generating notification sounds

Info

Publication number
WO2017045696A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound
selecting
group
computer
event
Prior art date
Application number
PCT/EP2015/070940
Other languages
English (en)
Inventor
Aaron Day
Original Assignee
Wire Swiss Gmbh
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wire Swiss Gmbh filed Critical Wire Swiss Gmbh
Priority to PCT/EP2015/070940 priority Critical patent/WO2017045696A1/fr
Publication of WO2017045696A1 publication Critical patent/WO2017045696A1/fr

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M19/00 Current supply arrangements for telephone systems
    • H04M19/02 Current supply arrangements for telephone systems providing ringing current or supervisory tones, e.g. dialling tone or busy tone
    • H04M19/04 Current supply arrangements for telephone systems providing ringing current or supervisory tones, e.g. dialling tone or busy tone the ringing-current being generated at the substations
    • H04M19/041 Encoding the ringing signal, i.e. providing distinctive or selective ringing capability
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • G10H1/18 Selecting circuits
    • G10H1/26 Selecting circuits for automatically producing a series of tones
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H7/00 Instruments in which the tones are synthesised from a data store, e.g. computer organs
    • G10H7/002 Instruments in which the tones are synthesised from a data store, e.g. computer organs using a common processing for different operations or calculations, and a set of microinstructions (programme) to control the sequence thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2230/00 General physical, ergonomic or hardware implementation of electrophonic musical tools or instruments, e.g. shape or architecture
    • G10H2230/005 Device type or category
    • G10H2230/021 Mobile ringtone, i.e. generation, transmission, conversion or downloading of ringing tones or other sounds for mobile telephony; Special musical data formats or protocols herefor
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72442 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for playing music files

Definitions

  • This invention relates to sound generation, and, more particularly, to automatic generation of notification sounds.
  • Notification sounds (e.g., ringing phones, computer alerts, incoming mail alerts, etc.) are used to get someone's attention.
  • Getting someone's attention with sound can be done in many ways.
  • the use of loud sounds with many harmonics and strong resonant frequencies is an effective method.
  • unless the use-case is mission critical (e.g., a warning to a pilot about to land that the landing gear has not been deployed), such methods often serve primarily to annoy the user. This approach can ruin the user experience of high-frequency use-cases such as, e.g., incoming email or message alerts, action confirmations, etc.
  • FIG. 1 depicts an overview of a device according to exemplary embodiments hereof;
  • FIG. 2(a) shows aspects of a data structure used by the device of FIG. 1, according to exemplary embodiments hereof;
  • FIG. 2(b) shows exemplary sound groups associated with particular events, according to exemplary embodiments hereof;
  • FIG. 3 shows exemplary processing in the system of FIG. 1, according to exemplary embodiments hereof;
  • FIG. 4 depicts an overview of a device according to exemplary embodiments hereof.
  • FIG. 5 is a schematic diagram of a computer system.
  • AIFF means Audio Interchange File Format
  • MIDI means Musical Instrument Digital Interface
  • PCM means pulse-code modulation
  • a “mechanism” refers to any device(s), process(es), routine(s), service(s), or combination thereof.
  • a mechanism may be implemented in hardware, software, firmware, using a special-purpose device, or any combination thereof.
  • a mechanism may be integrated into a single device or it may be distributed over multiple devices. The various components of a mechanism may be co-located or distributed. The mechanism may be formed from other mechanisms.
  • the term “mechanism” may thus be considered to be shorthand for the term device(s) and/or process(es) and/or service(s).
  • the inventor has realized that, given that humans tend to ignore repetition and respond to change, it follows that by changing a sound for an audio notification each time or while it plays we can leverage our capacity to notice change while keeping overall sound pressure levels for a ringtone or other alert low.
  • an "audio notification” refers to any sound or combination of sounds that is/are/may be used to try to notify someone of the occurrence or non-occurrence of an event.
  • the event may be any event and any kind of event, including, e.g., an incoming phone call, arrival of an email, a warning, a confirmation, or the like.
  • the event may be an event caused or generated by the user receiving the notification (e.g., a key press, a camera sound, etc.) or it may be caused by an action taken by another (e.g., an incoming phone call, a text message, etc.).
  • An audio notification may be used alone or in conjunction with other notifications to the person.
  • an audio notification may be combined with a visual notification and/or a vibration notification.
  • An audio notification may be rendered by one or more devices using general or special hardware on the device(s).
  • An audio notification may be stored on one or more devices and/or generated, in whole or in part, on the fly, in substantially real time.
  • a device 100 includes one or more sound rendering devices 102 that can render sounds 104 stored on the device.
  • the term "play" is also used herein to refer to the process of rendering a sound.
  • the device may be any kind of device, including, e.g., a computer device, a mobile phone such as a smart phone or the like, etc.
  • the sounds 104 may comprise one or more sound files. Those of ordinary skill in the art will realize and appreciate, upon reading this description, that the format of a sound file will depend, at least in part, on the type(s) of sound rendering device(s) 102.
  • a sound file may be or comprise a set of parameters that control a synthesis engine that outputs PCM data, e.g., MIDI data parametrically controlling a software synthesizer.
  • a sound may be considered to be a sound file or parametric control data controlling a software synthesis engine.
  • a sound file preferably uses a known format, e.g., MP3, wav, AIFF or the like.
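  • As a minimal illustration of this representation (not part of the original disclosure; the class and field names below are hypothetical), a sound record might hold either a path to stored audio data or parametric control data for a synthesis engine:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Sound:
    """A sound to be rendered: either a stored audio file (e.g., MP3, wav, AIFF)
    or parametric control data (e.g., MIDI-like parameters) for a software synthesizer."""
    name: str
    file_path: Optional[str] = None      # path to encoded/PCM audio data, if file-based
    synth_params: Optional[dict] = None  # parameters driving a synthesis engine, if parametric
    length_seconds: float = 0.0          # duration L of the rendered sound

    def is_parametric(self) -> bool:
        return self.synth_params is not None

# One file-based sound and one parametric sound (illustrative values only).
s1 = Sound(name="new message 1", file_path="sounds/new_message_1.wav", length_seconds=1.2)
s2 = Sound(name="new message 2", synth_params={"pitch": 72, "velocity": 90}, length_seconds=0.8)
print(s1.is_parametric(), s2.is_parametric())
```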
  • the sounds 104 on a device are preferably organized into one or more sound groups.
  • FIG. 1 shows the sound files organized into M sound groups for some number M (denoted "Sound Group 1", "Sound Group 2," ... "Sound Group M" in the drawing). Each sound group contains one or more sound files. As shown in FIG. 1, Sound Group 1 comprises P sound files, for some number P (denoted S1,1, S1,2, ..., S1,P in the drawing). It should be appreciated that the sound groups do not all have to have the same number of sound files and the sound files need not all be of the same format or length.
  • FIG. 2(a) shows an exemplary organization of sounds 104 on a device 100
  • sounds 104 includes M sound groups, with sound group 1 having P sound files (denoted Sound1,1, Sound1,2, ..., Sound1,P in the drawing), sound group 2 having Q sound files (denoted Sound2,1, Sound2,2, ..., Sound2,Q in the drawing), ... and sound group M having N sound files (denoted SoundM,1, SoundM,2, ..., SoundM,N in the drawing).
  • Each sound file has a length (L) corresponding to the duration of the actual sound represented by the sound file. There is no requirement that the sound files have the same length, although in some implementations the sound files in some sound groups may have the same length.
  • Sound files may be generated on the fly or made beforehand, e.g., with commercially available software (such as Ableton Live, Avid Pro-Tools, Apple Logic, etc.). There is no requirement that the sound files on any particular device or in any particular sound group be made or generated in the same way.
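  • A minimal sketch of this organization (illustrative only; the group names, file names, and counts below are hypothetical, not taken from the figures):

```python
# Hypothetical sound groups of varying sizes; groups need not have the same
# number of sound files, and the files need not share a format or length.
sound_groups = {
    "new message": ["new_message_1.wav", "new_message_2.wav", "new_message_3.wav"],  # P = 3
    "error": ["error_1.wav", "error_2.wav"],                                         # Q = 2
    "confirmation": ["confirm_1.wav", "confirm_2.wav", "confirm_3.wav"],
    "camera": ["shutter_1.wav"],
}

for group, files in sound_groups.items():
    print(f"sound group '{group}' has {len(files)} sound file(s)")
```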
  • a presently preferred implementation of the sound rendering device(s) 102 uses playback and modulation of PCM data.
  • the sound rendering device(s) 102 may use MIDI control of a software synthesizer that responds to parametric input.
  • Other variations may combine synthesis and playback of PCM data.
  • the sound groups may correspond to or be associated with events or types of events that may occur on or be associated with the devices.
  • the sound files in a sound group may be used for notifications associated with events or types of events.
  • the device is a telephone
  • one or more sound groups may be associated with ringtones or the like used by the telephone. It should be appreciated that there may be more than one sound group associated with each type of event.
  • Example events for a smartphone or computer device include incoming phone calls, incoming or outgoing text messages, error messages, key presses, powering on/off, etc.
  • FIG. 2(b) shows exemplary sound groups associated with particular events ("new message”, “error”, “confirmation”, “camera”).
  • the sound group “new message” has P sound files
  • the sound group “Error” has Q sound files
  • the sounds shown in FIG. 2(b) are provided only by way of example and are not intended to limit the scope hereof in any way.
  • the "new message” sound group is selected (at 304).
  • the device selects a sound from the sound group (at 306), e.g., using a random function.
  • the first sound selected is the sound "new message 2".
  • the device then renders the selected sound (in this case "new message 2") (at 308).
  • the device determines (at 310) whether or not to terminate the sound.
  • the sound may be terminated, e.g., because a user of the device has responded to an event or has taken some other action to cause the sound to terminate.
  • the device may receive an externally generated signal to terminate the sound. If the sound is not to terminate (as determined at 310), then processing continues with the selection of another sound from the sound group (at 306).
  • the sound is not to terminate and the next selected sound is the sound "new message j" (for some value j in the range 1 to P).
  • the next sound is played or rendered (at 308), and processing continues (at 310) to determine whether or not to terminate. Processing may continue until some event occurs to cause termination. In some cases sound associated with an event may occur a preset duration, a preset number of times, or indefinitely (until stopped by a signal or event occurring).
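  • A minimal sketch of this select/render/terminate loop (assuming a simple event-type-to-group mapping and a random selection function; the names and the should_terminate callback are illustrative, not the patent's API):

```python
import random
import time

# Hypothetical mapping of event types to sound groups (cf. FIG. 2(b)).
SOUND_GROUPS = {
    "new message": ["new_message_1.wav", "new_message_2.wav", "new_message_3.wav"],
    "error": ["error_1.wav", "error_2.wav"],
}

def render(sound_file: str) -> None:
    # Stand-in for the sound rendering device(s) 102; a real device would play audio here.
    print(f"playing {sound_file}")
    time.sleep(0.1)

def notify(event_type: str, should_terminate) -> None:
    """Select the sound group for the event type (at 304), then repeatedly select
    a sound (at 306), render it (at 308), and check for termination (at 310)."""
    group = SOUND_GROUPS[event_type]
    while True:
        sound = random.choice(group)   # e.g., selection using a random function
        render(sound)
        if should_terminate():         # e.g., user responded, caller hung up, or a preset count was reached
            break

# Example: render sounds from the "new message" group three times, then stop.
plays_left = {"n": 3}
def stop_after_three() -> bool:
    plays_left["n"] -= 1
    return plays_left["n"] <= 0

notify("new message", stop_after_three)
```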
  • the system may maintain a history of recently played sound files in order to enforce non-repetition of sound files within certain limits. For example, a device may try to avoid repetition of sound files more than every k plays, for some number k. In other examples, the device may require, for certain sound groups, that certain sound files are repeated at least every j plays, for some number j. For example, for a "new message" event, the system may require that "new message 1" be played first and at least once every three plays. Different and/or other rules may be imposed on the sound selection within sound groups.
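  • A sketch of such history rules (the parameters k and j and the helper names are illustrative; this is one possible realization of the rules just described, not the only one):

```python
import random
from collections import deque

def select_with_history(group, history, k=2, required=None, j=3):
    """Pick a sound from `group`, avoiding any of the last `k` played sounds, and
    (optionally) forcing `required` to be played at least once every `j` plays."""
    # Force the required sound if it has not appeared among the last j - 1 plays.
    if required is not None and required not in list(history)[-(j - 1):]:
        choice = required
    else:
        recent = list(history)[-k:] if k else []
        candidates = [s for s in group if s not in recent] or list(group)
        choice = random.choice(candidates)
    history.append(choice)
    return choice

group = ["new message 1", "new message 2", "new message 3", "new message 4"]
history = deque(maxlen=10)  # record of recently played sound files
for _ in range(8):
    print(select_with_history(group, history, k=2, required="new message 1", j=3))
```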
  • a sound file may be or comprise a sound loop as described in co-owned and copending U.S. patent application no. 62/039,979, filed August 21, 2014, the entire contents of which have been fully incorporated herein by reference for all purposes, and which is incorporated herein as Appendix A.
  • the device 100 includes a sound selector (or sound selector mechanism) 106 that is constructed and adapted and operates to select one or more sound files from sounds 104 to be rendered by sound rendering device(s) 102.
  • the sound selector 106 may be invoked by other mechanisms (not shown) on the device 100 when the device needs to render a sound.
  • the selection of a sound group may depend, at least in part, on the type of event for which the sound is to be played. Accordingly, when a sound is required to be played on the device in connection or association with an event, the sound selector 106 determines the type of the event (at 302). Based at least in part on the event type (determined at 302), the sound selector 106 then selects a sound group (at 304). The sound selector then selects a sound from the selected sound group (at 306) and the selected sound is then played (at 308) by the sound rendering device(s) 102.
  • the selection of a sound from the selected sound group may be based, e.g., on a function referred to herein as a selection function.
  • the selection function may be a function that randomly selects a sound file from the list of files in the selected sound group. For example, for the sound group 2 in FIG. 2(A), the selection function may randomly generate a number in the range 1 to Q (where Q is the number of sound files in the sound group).
  • the selection function may use or comprise a pseudorandom number generator that outputs random values over time.
  • the function's distribution may be implemented using a simple function (e.g., Gaussian) or a more complex implementation (e.g., a discrete-time Markov chain).
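  • As one possible sketch of such a non-uniform selection function (a small discrete-time Markov chain over a sound group, with illustrative transition weights; nothing here is prescribed by the disclosure):

```python
import random

# Transition weights from the previously played sound (None = nothing played yet)
# to the next sound. The weights are illustrative only.
TRANSITIONS = {
    None:      [("sound_1", 0.6), ("sound_2", 0.3), ("sound_3", 0.1)],
    "sound_1": [("sound_1", 0.1), ("sound_2", 0.6), ("sound_3", 0.3)],
    "sound_2": [("sound_1", 0.3), ("sound_2", 0.1), ("sound_3", 0.6)],
    "sound_3": [("sound_1", 0.6), ("sound_2", 0.3), ("sound_3", 0.1)],
}

def select_markov(previous):
    """Select the next sound file given the previously played one."""
    sounds, weights = zip(*TRANSITIONS[previous])
    return random.choices(sounds, weights=weights, k=1)[0]

prev = None
for _ in range(5):
    prev = select_markov(prev)
    print(prev)
```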
  • the selection function may sequentially select the sound files in the selected sound group, it being understood that in these cases the device will preferably maintain a record of the previous sound file played for each sound group.
  • the selection function may make a selection of a sound file from the sound group based on one or more of: information about the user of the device, information about the device (e.g., the type of device, etc.), external information (e.g., time of day, temperature, location, etc.).
  • the device 100 may need to continue playing a sound after a selected sound has completed playing. This may occur, e.g., when the sound is being played as a notification that requires acknowledgment (e.g., a notification of an incoming phone call and the like).
  • the device determines if the sound should terminate (at 310). If the sound should not terminate then the device may select and render another sound from the selected sound group (at 306, 308). This process may be repeated a fixed number of times or until the device indicates that the sound should terminate (e.g., when the caller hangs up or when the phone is answered).
  • a sound group may be played using some function (e.g., a random function) that selects the sound elements to play.
  • the selection of sound groups and/or sound elements may be affected or modulated by other factors (e.g., static or dynamic factors).
  • the sound selector mechanism 106' and/or the sound rendering device(s) 102' may be affected or influenced by one or more modulators 108.
  • the selection of a sound (or a sound group) by sound selector 106' may be affected by one or more factors (e.g., values) provided by modulator(s) 108.
  • the rendering of a selected sound by sound rendering device(s) 102' may be affected by one or more factors (e.g., values) provided by modulator(s) 108.
  • Modulator(s) 108 may be used to select sounds based on static and/or dynamic information, and the information may be determined or derived from information external to the device, information from another device, or any other source.
  • a modulator 108 may provide a value based on one or more of: the time of day, day of week, date, current temperature, current weather, identity of the device user, identity of an incoming caller, identity of an incoming message sender.
  • the concept of modulation uses a so-called “source” and a so-called “target,” where a source may be or comprise any kind of information at any resolution of a static variable (e.g., a given day of the week) or a time varying function (e.g., the temperature between two different times of the day).
  • a source may also comprise a pseudorandom number generator.
  • a source may be mapped to a target using any kind of function, including a linear mapping and an exponential mapping.
  • a source may map to a target in a manner such that any change in the source is associated with a change in the target, where the change may be linear, exponential or any other function.
  • a source may, e.g., be an evenly weighted random function applied to some parameter within a given sound group (e.g., volume or individual sound; e.g., order of sounds within a group, etc.)
  • a target may be or comprise any parameter of a system, sound group or sound (e.g., volume, sound group number, sound number, cutoff frequency of a low-pass filter applied to individual sound group output, summed output (a summed mix of each sound group), etc.)
  • a source may affect multiple targets and vice versa.
  • Some random function determines which sound within a sound group is played.
  • Some random function may refer to a pseudorandom number generator that outputs random values over time.
  • the function's distribution may be simple (e.g. Gaussian) or more complex (e.g. a discrete-time Markov chain), and the system is not limited by the function's distribution.
  • Some random function with persistence determines the amplitude of the next sound (target) to play.
  • persistence generally refers to a value or values sampled from the parametric state of the previous system, sound group, or sound, e.g., the last value assigned to the amplitude of a given sound.
  • As used herein, an "external modulation source" refers to a variable or time-varying function, other than a pseudorandom number generator, that supplies data that can be applied to a target.
  • Some random function (source1) with an external modulation source (source2) and persistence (source3) may affect some aspect of (target1), (target2), and (target3).
  • some or all of the parametric values at a given time from a system, sound group or sound may be applied to the parametric control of values (same or different) of targets of the consecutive (or parallel, thus providing cross-modulation) system, sound group, or sound.
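  • A sketch of the source-to-target idea (a time-of-day source mapped linearly and exponentially onto two targets, plus a random source with persistence; the mapping ranges and parameter choices are arbitrary examples, not taken from the disclosure):

```python
import math
import random
from datetime import datetime

def linear_map(value, in_lo, in_hi, out_lo, out_hi):
    """Map a source value linearly onto a target range."""
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

def exponential_map(value, in_lo, in_hi, out_lo, out_hi):
    """Map a source value onto a target range along an exponential curve."""
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + (math.exp(t) - 1.0) / (math.e - 1.0) * (out_hi - out_lo)

now = datetime.now()
hour = now.hour + now.minute / 60.0                          # source 1: time of day (external, time-varying)

volume = linear_map(hour, 0.0, 24.0, 0.2, 1.0)               # target 1: playback volume
cutoff_hz = exponential_map(hour, 0.0, 24.0, 500.0, 8000.0)  # target 2: low-pass filter cutoff

# Source 2: a random function with persistence -> target 3: amplitude of the next sound.
# "Persistence" here: the new value is drawn relative to the previously assigned value.
previous_amplitude = 0.5
amplitude = min(1.0, max(0.0, previous_amplitude + random.uniform(-0.1, 0.1)))

print(f"volume={volume:.2f} cutoff={cutoff_hz:.0f} Hz amplitude={amplitude:.2f}")
```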
  • a user taps the capacitive touch screen of a device such as a smartphone.
  • a sound associated with the user's tapping is modulated (varied) based on one or more factors such as frequency, velocity, and pressure of the user's tapping.
  • the sound may be rendered on the user's device or on another device (e.g., on a device associated with a different user).
  • a user's interactions with one or more input mechanisms of a device may be treated as a modulation source applied to sounds associated with those interactions.
  • Modulation may thus be applied based on a user's interaction (e.g., direct interaction) with a device (such as the frequency of touches and/or the velocity and/or pressure of those touches, e.g., applied to a capacitive touch screen).
  • the harder and faster a user "knocks" or "pings" on a device the more the sound changes.
  • the device effectively becomes a near real-time transducer for interactions that may change the way sounds are represented.
  • this treats the user's interaction with a device as a modulation source that can be applied to the sounds themselves.
  • the system may provide for automatic generation of and modulation by random or external real-time input such as frequency and/or velocity and/or pressure of touch events.
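  • A sketch of treating touch input as a modulation source (the tap-rate and pressure mappings below are illustrative; no particular device API is implied):

```python
import time

class TouchModulator:
    """Maps a user's taps (frequency and, where reported, pressure) onto sound
    parameters, so that harder/faster interaction varies the rendered sound more."""

    def __init__(self):
        self.last_tap = None

    def on_tap(self, pressure: float = 0.5) -> dict:
        now = time.monotonic()
        tap_rate_hz = 0.0
        if self.last_tap is not None:
            tap_rate_hz = 1.0 / max(now - self.last_tap, 1e-3)
        self.last_tap = now
        # Map the sources (tap rate, pressure) onto hypothetical sound-parameter targets.
        pitch_shift = min(tap_rate_hz / 10.0, 1.0)   # 0..1, grows with faster tapping
        brightness = min(max(pressure, 0.0), 1.0)    # 0..1, grows with harder presses
        return {"pitch_shift": pitch_shift, "brightness": brightness}

mod = TouchModulator()
for p in (0.2, 0.6, 0.9):
    print(mod.on_tap(pressure=p))
    time.sleep(0.05)
```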
  • The mechanisms described herein (e.g., sound selector 106) may be implemented as specialized devices and/or as programs operating on a computer system, as described herein. Programs that implement such methods (as well as other types of data) may be stored and transmitted using a variety of media (e.g., computer readable media) in a number of manners. Hard-wired circuitry or custom hardware may be used in place of, or in combination with, some or all of the software instructions that can implement the processes of various embodiments. Thus, various combinations of hardware and software may be used instead of software only.
  • FIG. 5 is a schematic diagram of a computer system 500 upon which embodiments of the present disclosure may be implemented and carried out.
  • the computer system 500 includes a bus 502 (i.e., interconnect), one or more processors 504, one or more communications ports 514, a main memory 506, removable storage media 510, read-only memory 508, and a mass storage 512.
  • Communication port(s) 514 may be connected to one or more networks by way of which the computer system 500 may receive and/or transmit data.
  • a "processor” means one or more microprocessors, central processing units (CPUs), computing devices, microcontrollers, digital signal processors, or like devices or any combination thereof, regardless of their architecture.
  • An apparatus that performs a process can include, e.g., a processor and those devices such as input devices and output devices that are appropriate to perform the process.
  • Processor(s) 504 can be (or include) any known processor, such as, but not limited to, Intel® Itanium® or Itanium 2® processor(s), AMD® Opteron® or Athlon MP® processor(s), or the like.
  • Processor(s) may include one or more graphical processing units (GPUs) which may be on graphic cards or stand-alone graphic processors.
  • Communications port(s) 514 can be any of an RS-232 port for use with a modem based dial-up connection, a 10/100 Ethernet port, a Gigabit port using copper or fiber, or a USB port, and the like. Communications port(s) 514 may be chosen depending on a network such as a Local Area Network (LAN), a Wide Area Network (WAN), a CDN, or any network to which the computer system 500 connects.
  • the computer system 500 may be in communication with peripheral devices (e.g., display screen 516, input device(s) 518) via Input / Output (I/O) port 520. Some or all of the peripheral devices may be integrated into the computer system 500, and the input device(s) 518 may be integrated into the display screen 516 (e.g., in the case of a touch screen).
  • Main memory 506 can be Random Access Memory (RAM), or any other dynamic storage device(s) commonly known in the art.
  • Read-only memory 508 can be any static storage device(s) such as Programmable Read-Only Memory (PROM) chips for storing static information such as instructions for processor(s) 504.
  • Mass storage 512 can be used to store information and instructions. For example, hard disks such as the Adaptec® family of Small Computer System Interface (SCSI) drives, an optical disc, an array of disks such as a Redundant Array of Independent Disks (RAID), such as the Adaptec® family of RAID drives, or any other mass storage devices may be used.
  • Bus 502 communicatively couples processor(s) 504 with the other memory, storage and communications blocks.
  • Bus 502 can be a PCI / PCI-X, SCSI, a Universal Serial Bus (USB) based system bus (or other) depending on the storage devices used, and the like.
  • Removable storage media 510 can be any kind of external hard-drives, floppy drives, IOMEGA® Zip Drives, Compact Disc - Read Only Memory (CD-ROM), Compact Disc - Re-Writable (CD-RW), Digital Versatile Disk - Read Only Memory (DVD-ROM), etc.
  • Embodiments herein may be provided as one or more computer program products, which may include a machine-readable medium having stored thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process.
  • machine-readable medium refers to any medium, a plurality of the same, or a combination of different media, which participate in providing data (e.g., instructions, data structures) which may be read by a computer, a processor or a like device.
  • Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media.
  • Non-volatile media include, for example, optical or magnetic disks and other persistent memory.
  • Volatile media include dynamic random access memory, which typically constitutes the main memory of the computer.
  • Transmission media include coaxial cables, copper wire and fiber optics, including the wires that comprise a system bus coupled to the processor. Transmission media may include or convey acoustic waves, light waves and electromagnetic emissions, such as those generated during radio frequency (RF) and infrared (IR) data communications.
  • the machine-readable medium may include, but is not limited to, floppy diskettes, optical discs, CD-ROMs, magneto-optical disks, ROMs, RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or other type of media/machine-readable medium suitable for storing electronic instructions.
  • embodiments herein may also be downloaded as a computer program product, wherein the program may be transferred from a remote computer to a requesting computer by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., modem or network connection).
  • data may be (i) delivered from RAM to a processor; (ii) carried over a wireless transmission medium; (iii) formatted and/or transmitted according to numerous formats, standards or protocols; and/or (iv) encrypted in any of a variety of ways well known in the art.
  • a computer-readable medium can store (in any appropriate format) those program elements that are appropriate to perform the methods.
  • main memory 506 is encoded with application(s) 522 that support(s) the functionality as discussed herein (an application 522 may be an application that provides some or all of the functionality of one or more of the mechanisms described herein).
  • Application(s) 522 (and/or other resources as described herein) can be embodied as software code such as data and/or logic instructions (e.g., code stored in the memory or on another computer readable medium such as a disk) that supports processing functionality according to different embodiments described herein.
  • processor(s) 504 accesses main memory 506 via the use of bus 502 in order to launch, run, execute, interpret or otherwise perform the logic instructions of the application(s) 522.
  • Execution of application(s) 522 produces processing functionality of the service(s) or mechanism(s) related to the application(s).
  • the process(es) 524 represents one or more portions of the application(s) 522 performing within or upon the processor(s) 504 in the computer system 500.
  • the application 522 itself (i.e., the un-executed or non-performing logic instructions and/or data).
  • the application 522 may be stored on a computer readable medium (e.g., a repository) such as a disk or in an optical medium.
  • the application 522 can also be stored in a memory type system such as in firmware, read only memory (ROM), or, as in this example, as executable code within the main memory 506 (e.g., within Random Access Memory or RAM).
  • application 522 may also be stored in removable storage media 510, read-only memory 508, and/or mass storage device 512.
  • the computer system 500 can include other processes and/or software and hardware components, such as an operating system that controls allocation and use of hardware resources.
  • embodiments of the present invention include various steps or operations. A variety of these steps may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor programmed with the instructions to perform the operations.
  • module refers to a self-contained functional component, which can include hardware, software, firmware or any combination thereof.
  • an apparatus may include a computer/computing device operable to perform some (but not necessarily all) of the described process.
  • Embodiments of a computer-readable medium storing a program or data structure include a computer-readable medium storing a program that, when executed, can cause a processor to perform some (but not necessarily all) of the described process.
  • process may operate without any user intervention.
  • process includes some human intervention (e.g., a step is performed by or with the assistance of a human).
  • real time means near real time or sufficiently real time. It should be appreciated that there are inherent delays in network-based and computer operations.
  • real-time data may refer to data obtained in sufficient time to make the data useful for its intended purpose.
  • real time may be used here, it should be appreciated that the system is not limited by this term or by how much time is actually taken to perform any particular process.
  • real time computation may refer to an online computation, i.e., a computation that produces its answer(s) as data arrive, and generally keeps up with continuously arriving data.
  • online computation is compared to an "offline” or "batch” computation.
  • portion means some or all. So, for example,
  • a portion of X may include some of "X" or all of "X".
  • e.g., a portion of a conversation means some or all of the conversation.
  • the phrase “at least some” means “one or more,” and includes the case of only one.
  • the phrase “at least some ABCs” means “one or more ABCs”, and includes the case of only one ABC.
  • the phrase “based on” means “based in part on” or “based, at least in part, on,” and is not exclusive.
  • the phrase “based on factor X” means “based in part on factor X” or “based, at least in part, on factor X.”
  • the phrase “based on X” does not mean “based only on X.”
  • the phrase “using” means “using at least,” and is not exclusive. Thus, e.g., the phrase “using X” means “using at least X.” Unless specifically stated by use of the word “only”, the phrase “using X” does not mean “using only X.”
  • the phrase “distinct” means “at least partially distinct.” Unless specifically stated, distinct does not mean fully distinct. Thus, e.g., the phrase, "X is distinct from Y” means that "X is at least partially distinct from Y,” and does not mean that "X is fully distinct from Y.” Thus, as used herein, including in the claims, the phrase “X is distinct from Y” means that X differs from Y in at least some way.
  • a list may include only one item, and, unless otherwise stated, a list of multiple items need not be ordered in any particular manner.
  • a list may include duplicate items.
  • the phrase "a list of XYZs" may include one or more "XYZs”.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Environmental & Geological Engineering (AREA)
  • General Engineering & Computer Science (AREA)
  • Telephone Function (AREA)

Abstract

A computer-implemented method includes selecting a sound element from a sound group associated with an event type, the sound group comprising a plurality of sound elements, each sound element corresponding to a sound that can be rendered by a sound rendering device; and playing the selected sound element. The method is operable to produce ringtones or alerts on computing devices, mobile phones, and smartphones.
PCT/EP2015/070940 2015-09-14 2015-09-14 Systems, methods, and devices for generating notification sounds WO2017045696A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2015/070940 WO2017045696A1 (fr) 2015-09-14 2015-09-14 Systems, methods, and devices for generating notification sounds

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2015/070940 WO2017045696A1 (fr) 2015-09-14 2015-09-14 Systems, methods, and devices for generating notification sounds

Publications (1)

Publication Number Publication Date
WO2017045696A1 true WO2017045696A1 (fr) 2017-03-23

Family

ID=54145760

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2015/070940 WO2017045696A1 (fr) 2015-09-14 2015-09-14 Systems, methods, and devices for generating notification sounds

Country Status (1)

Country Link
WO (1) WO2017045696A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040204146A1 (en) * 2002-05-23 2004-10-14 Douglas Deeds Programming multiple ringing tones of a terminal
US20050107075A1 (en) * 2003-11-18 2005-05-19 Snyder Thomas D. Shuffle-play for a wireless communications device
US20060060069A1 (en) * 2004-09-23 2006-03-23 Nokia Corporation Method and device for enhancing ring tones in mobile terminals
US20060130636A1 (en) * 2004-12-16 2006-06-22 Samsung Electronics Co., Ltd. Electronic music on hand portable and communication enabled devices


Similar Documents

Publication Publication Date Title
US9672000B2 (en) Method and apparatus for generating an audio notification file
US10879863B2 (en) Automatic adjustments of audio alert characteristics of an alert device using ambient noise levels
US8749349B2 (en) Method apparatus and computer program
EP4027631A2 (fr) Collaborative phone reputation system
US9497309B2 (en) Wireless devices and methods of operating wireless devices based on the presence of another person
CN1658182B (zh) Method for modifying notifications in an electronic device
US8285339B2 (en) Mobile communication terminal and method for performing automatic incoming call notification mode change
CA2539649C (fr) System and method for personalized text-to-speech synthesis
WO2017045696A1 (fr) Systems, methods, and devices for generating notification sounds
WO2016026755A1 (fr) Systems, methods, and devices for generating notification sounds
US8897840B1 (en) Generating a wireless device ringtone
EP2431864B1 (fr) Method and apparatus for generating an audio notification file
US8625774B2 (en) Method and apparatus for generating a subliminal alert
EP2912556A1 (fr) Sending a video ringtone
JP2007266803A (ja) Communication device with ringtone melody function
US10142481B2 (en) Voicemail transmission utilizing signals associated with radio band frequencies
US20130117729A1 (en) Telecommunications application generator
EP2523177A2 (fr) Voice learning method using an electronic device
JPH10190781A (ja) Portable terminal with original ringtone generation function
TW424373B (en) Device for speech realization of caller's incoming message and process method thereof
CN116156045A (zh) Communication system with stealth mode for a mobile device
CN104363563A (zh) Network-based method and system for controlling the sound of a mobile terminal
JP3408465B2 (ja) Portable terminal device
KR20090086764A (ko) Method and terminal for outputting sound source data according to a message
KR200347084Y1 (ko) Apparatus for reproducing a monophonic ringtone using a MIDI sound source

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15763895

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 07.06.2018)

122 Ep: pct application non-entry in european phase

Ref document number: 15763895

Country of ref document: EP

Kind code of ref document: A1