WO2013088437A1 - System and method of translating digital sounds into mobile device vibrations - Google Patents


Info

Publication number
WO2013088437A1
WO2013088437A1 (PCT/IL2012/050518)
Authority
WO
WIPO (PCT)
Prior art keywords
digital audio
audio file
sound
vibration pattern
user
Application number
PCT/IL2012/050518
Other languages
French (fr)
Inventor
Ohad SHEFFER
Kobi CALEV
Omri COHEN ALLORO
Original Assignee
Play My Tone Ltd.
Application filed by Play My Tone Ltd.
Publication of WO2013088437A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72442 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for playing music files
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B06 GENERATING OR TRANSMITTING MECHANICAL VIBRATIONS IN GENERAL
    • B06B METHODS OR APPARATUS FOR GENERATING OR TRANSMITTING MECHANICAL VIBRATIONS OF INFRASONIC, SONIC, OR ULTRASONIC FREQUENCY, e.g. FOR PERFORMING MECHANICAL WORK IN GENERAL
    • B06B1/00 Methods or apparatus for generating mechanical vibrations of infrasonic, sonic, or ultrasonic frequency
    • B06B1/02 Methods or apparatus for generating mechanical vibrations of infrasonic, sonic, or ultrasonic frequency making use of electrical energy
    • B06B1/0207 Driving circuits
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2220/00 Input/output interfacing specifically adapted for electrophonic musical tools or instruments
    • G10H2220/091 Graphical user interface [GUI] specifically adapted for electrophonic musical instruments, e.g. interactive musical displays, musical instrument icons or menus; Details of user interactions therewith
    • G10H2220/101 Graphical user interface [GUI] specially adapted for graphical creation, edition or control of musical data or parameters
    • G10H2220/126 Graphical user interface [GUI] for graphical editing of individual notes, parts or phrases represented as variable-length segments on a 2D or 3D representation, e.g. graphical edition of musical collage, remix files or piano-roll representations of MIDI-like files
    • G10H2230/00 General physical, ergonomic or hardware implementation of electrophonic musical tools or instruments, e.g. shape or architecture
    • G10H2230/005 Device type or category
    • G10H2230/015 PDA [personal digital assistant] or palmtop computing devices used for musical purposes, e.g. portable music players, tablet computers, e-readers or smart phones in which mobile telephony functions need not be used
    • G10H2230/021 Mobile ringtone, i.e. generation, transmission, conversion or downloading of ringing tones or other sounds for mobile telephony; Special musical data formats or protocols therefor
    • G10H2240/00 Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H2240/325 Synchronizing two or more audio tracks or files according to musical features or musical timings
    • H04M19/00 Current supply arrangements for telephone systems
    • H04M19/02 Current supply arrangements for telephone systems providing ringing current or supervisory tones, e.g. dialling tone or busy tone
    • H04M19/04 Current supply arrangements for telephone systems providing ringing current or supervisory tones, the ringing-current being generated at the substations
    • H04M19/048 Arrangements providing optical indication of the incoming call, e.g. flasher circuits


Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Auxiliary Devices For Music (AREA)

Abstract

The present invention is directed to a method for translation of a sound track into a vibration pattern, according to which a digital audio file (such as a music file) containing the sound track is analyzed according to a sound parameter, to form a sound analysis output. A vibration pattern (VP) is generated in synchrony with the digital audio file, and both the VP and the digital audio file are output simultaneously, to enhance the musical experience. The vibration pattern is generated by converting the sound of the digital audio file into the vibration pattern, which is used as a control signal that is forwarded to an appropriate Application Programming Interface (API) of the operating system of a mobile device, to control the vibration functionality of the mobile device.

Description

SYSTEM AND METHOD OF TRANSLATING DIGITAL SOUNDS INTO MOBILE DEVICE VIBRATIONS
Field of the Invention
The invention disclosed herein relates generally to a method and system for translating digital sounds into representative vibration patterns, allowing a user to simultaneously hear music and feel synchronized vibration patterns on his mobile device, using the mobile device's vibration feature.
Background of the Invention
Modern mobile devices allow developers to develop applications that use the device's inbuilt components, such as a vibration element (generally used to provide an alternative or an additional alert, when the ringtone of the mobile device is disabled). Currently, the vibration element is used for applications such as massage simulation, electric shaving machine simulation and other applications which use the device's ability to vibrate.
US7241946, US7280647, US2006194626, US2009006338, US2008043642 and WO11010322 disclose methods for exploiting the inherent vibration feature for uses other than providing alerts to the user. However, none of these prior art methods has suggested enhancing the experience of listening to music by feeling synchronized vibration patterns along with hearing sounds.
It is an object of the present invention to provide a method and system for translating digital sounds into representative vibration patterns, allowing a user to simultaneously hear music and feel synchronized vibration patterns on his mobile device, using the mobile device's inherent vibration feature.
Summary of the Invention
The present invention is directed to a method for translation of a sound track into a vibration pattern, according to which a digital audio file (such as a music file) containing the sound track is analyzed according to a sound parameter to form a sound analysis output. A vibration pattern (VP) is generated in synchrony with the digital audio file, and both the VP and the digital audio file are output simultaneously, to enhance the musical experience. The vibration pattern is generated by converting the sound of the digital audio file into the vibration pattern.
The VP and the digital audio file, which may be selected by a user, may be output on a user mobile device, such as a cellphone. These files may be uploaded by the user on his mobile device.
The digital audio file may be analyzed according to the tempo, timbre and energy of the audio signal.
The vibration pattern may be proportional or inversely proportional to a sound parameter of the digital audio file, and the VP may be output in synchrony with the digital audio file.
The present invention is also directed to computer software for translation of a sound track into a vibration pattern. The software comprises a computer-readable medium which is adapted for:
a. analyzing at least one digital audio file according to at least one sound parameter, to form a sound analysis output; b. generating a vibration pattern (VP) in synchrony with the at least one digital audio file; and
c. simultaneously outputting both the VP and the at least one digital audio file.
Outputting both the VP and the digital audio file may be implemented in a user mobile device.
The present invention is further directed to a system for translation of a sound track into a vibration pattern, that comprises:
a. a processor;
b. a plurality of user devices; and
c. a data network connecting between the processor and the plurality of user devices.
The processor may be configured to upload at least one digital audio file to a webpage visible on at least one of the plurality of user devices. The processor is further adapted to analyze the at least one digital audio file according to at least one sound parameter to form a sound analysis output, to generate a vibration pattern (VP) in synchrony with the at least one digital audio file, and to simultaneously output both the VP and the at least one digital audio file to at least one user device.
In one aspect, the vibration pattern is generated by:
a) receiving a segment of the sound track;
b) applying a set of preprocessors that run in parallel to the audio file;
c) scoring the preprocessing results according to both the periodicity and non-uniform distribution inside a musical bar;
d) constructing note-scale similarity matrices, based on metrics that represent local musical timbre (such as MFCCs, or various measures on the auto-correlation signal);
e) applying thresholding such that each line in a similarity matrix represents a binary vector that corresponds to a certain underlying musical element in the sound track;
f) selecting from the similarity matrix a repetitive line having un-even or non-uniform distribution within a musical-bar;
g) applying a downbeat location process and scoring the results by preferring lines with higher syncopation; and
h) projecting clusters of several pixels after thresholding the pixels values having the highest score.
The preprocessors may be selected from the group consisting of:
a) a filter-bank of low-pass, midrange and high-pass filters that divides the signal into relevant frequency ranges, each of which including one or more instruments playing concurrently;
b) a full-range filter for detecting a repetitive diverse segment; and
c) a detector of stationary high-spectral-spread events, being non-percussive events with stable frequency components that spread across a wide frequency range.
Brief Description of the Drawings
The invention will now be described in connection with certain preferred embodiments with reference to the following illustrative figures so that it may be more fully understood.
In the drawings: Fig. 1 is a simplified pictorial illustration showing a system for translation of sounds into vibrations, in accordance with an embodiment of the present invention;
Fig. 2 is a simplified flowchart of a method for translation of sounds into vibrations, in accordance with an embodiment of the present invention;
Fig. 3 illustrates the use of a similarity matrix for extracting similarity between bars of a sound track, which may then be associated with particular instruments; and
Fig. 4 illustrates the display screen of a mobile device which exploits the translation of sounds into vibrations, in accordance with an embodiment of the present invention.
Detailed Description of the Embodiments of the Invention
Fig. 1 illustrates a system for translation of sounds into vibrations, in accordance with an embodiment of the present invention. System 100 typically includes a server 110, which may include one or a plurality of servers, and one or more control computer terminals 112 for programming, troubleshooting, servicing and other functions. Server 110 is linked to a data network, such as the Internet 120 through link 122, for running the system website for automatic audio loop detection and generation 123 and for communicating with users. Users 198 may communicate with the server through a plurality of user terminal devices 130, which may be portable terminal devices, small hand-held terminal devices and mobile phones, linked to the Internet 120 through a plurality of links 124. The Internet link of each of the terminal devices 130 may be direct, through a landline or a wireless line, or indirect, for example through an intranet that is linked to the Internet through an appropriate server.
System 100 may also operate through communication protocols between terminal devices over the Internet or other data networks, such as cellular or wireless data networks. Users may also communicate with the system through portable communication devices such as 3rd-generation mobile phones 140, communicating with the Internet through a corresponding communication system (e.g., a cellular system) 150 connectable to the Internet through link 152. As will readily be appreciated, this is a very simplified description, although the details should be clear to the artisan. Users typically download one or more audio files 199 from website 123. System 100 further comprises software 112 for the analysis of sound tracks. These software packages may be located, for example, in server utility 110 or elsewhere in the system.
The user may download the audio file to cellphone 140 and/or to his/her terminal device 130. Additionally or alternatively, the user may send the audio file from the cellphone/computer to his/her cellphone/computer or to one or more recipient cellphones/terminal devices. It should also be noted that the invention is not limited to the user-associated communication devices described above (terminal devices and portable and mobile communication devices); a variety of others, such as an interactive television system, may also be used. The system 100 may also include at least one call and/or user center 160. The support center 160 typically provides both on-line and offline technical support services to users. The server system 110 is configured according to the invention to carry out the above-described method for translation of sounds into vibrations. Many mobile devices in the system, such as cellphone 140, comprise a display screen 142 and a vibrator element 144, as is known in the art. Reference is now made to Fig. 2, which is a simplified flowchart of a method for translation of sounds into vibrations, in accordance with an embodiment of the present invention.
At a first step 202, a user chooses digital sound files to be uploaded onto his device. The user can listen to sound files/tracks provided by the system on his device and select the one or more desired sound files. At the next step 204, system 100 is operative to analyze the selected sound file(s) by sound parameters (wavelength, resonance, tone, pitch, etc.) and to form a Sound Analysis Output (SAO) by software 112 in computer 110. The audio tracks and sound analysis outputs may be stored in a database 111 or in memory 114.
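Step 204 can be illustrated with a minimal sketch. The patent does not disclose a concrete analysis algorithm, so the per-frame RMS energy below is only one hypothetical choice of "sound parameter"; the function name and frame length are invented for illustration.

```python
import math

def frame_rms(samples, frame_len):
    """Split a mono signal into fixed-length frames and return each
    frame's RMS energy - a minimal stand-in for the patent's
    Sound Analysis Output (SAO)."""
    rms = []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        rms.append(math.sqrt(sum(s * s for s in frame) / frame_len))
    return rms

# Toy signal: a quiet 440 Hz tone followed by a loud one (fs = 8 kHz).
fs = 8000
quiet = [0.1 * math.sin(2 * math.pi * 440 * n / fs) for n in range(4000)]
loud = [0.9 * math.sin(2 * math.pi * 440 * n / fs) for n in range(4000)]
sao = frame_rms(quiet + loud, 1000)
```

Louder frames yield larger SAO values, which a later step can map to stronger or denser vibration.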
At the next step 206, system 100 is operative to create, in parallel, a vibration pattern (VP) in the form of a vibration track, which is in synchrony and in proportion to the generated SAO.
Not every sound track has a natural underlying rhythmical track suitable for generating a corresponding vibration track. The sound track is therefore analyzed and, if such an underlying track exists, possible vibration tracks are constructed, each scored on a scale of how interesting (yet stable along bar units) it is, so as to determine whether a vibration track is suitable for the analyzed sound track. The proposed system outputs a possible vibration track at any rate, scores various segments of the sound track as preferred candidates for vibration generation, and suggests the relevant parts to the user. Such preferred candidate segments are typically the most interesting parts of a musical composition, with high rhythmical salience. It is also possible to use the rhythmic signal extracted by the proposed method as a control (or triggering) signal for obtaining other effects associated with the sound track. These effects may include:
1. Adding another musical layer to the sound track or applying a filter at these specific time intervals.
2. Using the control signal as a gate in a sound-chain, such as a gated reverb (an audio processing technique applied to drum recordings, or to live sound reinforcement of drums in a PA system, to make the drums sound powerful and "punchy" while keeping the overall mix clean and transparent-sounding; the gated reverb effect is made using a combination of strong reverb and a noise gate).
3. Using the extracted time intervals as samples in a sample bank.
4. Using the control signal as control-points for scratching hooks (a DJ or turntablist, an expert at manipulating sounds and creating music using phonograph turntables and a DJ mixer, uses the scratching technique to produce distinctive sounds by moving a vinyl record back and forth on a turntable while optionally manipulating the crossfader on a DJ mixer; crossfading is a technique that creates a smooth transition from one sound to another).
5. Using the scoring function of the extracted rhythmic sections to identify interesting segments of a song.
The generated vibration-track and/or the extracted rhythmic signal may also be used as triggers for other control signals (to initiate an extra musical channel, a visual cue, etc.). In order to generate a vibration track, at the first step, the user selects a track segment of a recorded musical composition.
At the next step, a set of preprocessors that run in parallel is applied to the audio file. At the next step, the preprocessing results are scored according to both the periodicity and the high variance inside a musical bar. Such a set may be a filter-bank of low-pass, midrange and high-pass filters that divides the signal into relevant bands. Separating between frequency ranges allows focusing on different aspects of the played sound. For example, the range below 200 Hz includes bass guitars. The range between 2 and 4.5 kHz includes vocals and leading instruments, such as guitars and piano. The range between 4.5 and 8 kHz includes more rhythmical sounds, such as percussive sounds and strumming.
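The filter-bank stage can be sketched with simple first-order filters. This is not the patent's implementation (no filter design is disclosed); the one-pole RC filters and exact band edges below are assumptions, used only to show how band-splitting isolates different instruments.

```python
import math

def one_pole_lowpass(x, cutoff_hz, fs):
    """First-order RC low-pass filter (gentle 6 dB/octave slope)."""
    dt = 1.0 / fs
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    alpha = dt / (rc + dt)
    out, prev = [], 0.0
    for s in x:
        prev += alpha * (s - prev)
        out.append(prev)
    return out

def split_bands(x, fs):
    """Split a signal into low (<200 Hz), mid (~2-4.5 kHz) and
    high (above ~4.5 kHz) bands, mirroring the ranges named in the text."""
    low = one_pole_lowpass(x, 200.0, fs)
    below_4500 = one_pole_lowpass(x, 4500.0, fs)
    below_2000 = one_pole_lowpass(x, 2000.0, fs)
    mid = [a - b for a, b in zip(below_4500, below_2000)]
    high = [a - b for a, b in zip(x, below_4500)]
    return low, mid, high

def rms(x):
    return math.sqrt(sum(s * s for s in x) / len(x))

# A 50 Hz "bass" tone should survive in the low band and vanish elsewhere.
fs = 16000
bass = [math.sin(2 * math.pi * 50 * n / fs) for n in range(fs)]
low, mid, high = split_bands(bass, fs)
```

A production system would use sharper filters (e.g. higher-order IIR designs), but the principle is the same: each band emphasizes a different family of instruments.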
A full-range signal (no filtering) is also taken, so as to detect a segment that is diverse yet, at the same time, repetitive.
Another preprocessor seeks stationary high-spectral spread events, which are non-percussive events with stable frequency components that spread across a wide frequency range. These events are possible unison moments, where several instruments play at the same time (e.g., orchestra-hits, section hits, non-arpeggiated chords, clusters, etc.), in contrast to percussive events which are also broad-band but have chirp-like qualities. Percussive events that leak into unison-hits may be tolerated, if they are part of the unison.
At the next step, note-scale similarity matrices are constructed based on metrics that represent local musical timbre. At the next step, thresholding is applied, such that each line in a similarity matrix represents a binary vector of a possible vibration track that corresponds to a certain underlying musical element in the original track.
At the next step, a desired line is selected from the similarity matrix. The selected line should be repetitive (its auto-correlation will get a high periodicity score) and its distribution within a musical-bar should be uneven or non-uniform (measured by a relatively high variance of the time gaps between the event onsets). After a downbeat location process is applied, further scoring may be performed by preferring lines with higher syncopation (mapping the segment into a meter grid and giving higher score to even indexes).
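The line-selection criteria above (bar-level periodicity combined with an uneven intra-bar onset distribution) can be sketched on binary onset vectors. The concrete scoring formula below is an invented illustration; the patent names the criteria but not a formula.

```python
def periodicity(line, bar_len):
    """Fraction of onsets that repeat exactly one bar later -
    a crude stand-in for a high auto-correlation peak at the bar lag."""
    onsets = [i for i, v in enumerate(line[:-bar_len]) if v]
    if not onsets:
        return 0.0
    return sum(1 for i in onsets if line[i + bar_len]) / len(onsets)

def gap_variance(line):
    """Variance of the time gaps between onsets - rewards uneven,
    syncopated lines over evenly spaced ones."""
    onsets = [i for i, v in enumerate(line) if v]
    gaps = [b - a for a, b in zip(onsets, onsets[1:])]
    if len(gaps) < 2:
        return 0.0
    mean = sum(gaps) / len(gaps)
    return sum((g - mean) ** 2 for g in gaps) / len(gaps)

def line_score(line, bar_len):
    return periodicity(line, bar_len) * (1.0 + gap_variance(line))

# Two candidate lines over four 8-step bars:
uniform = [1, 0, 1, 0, 1, 0, 1, 0] * 4   # even eighth notes: periodic but flat
uneven = [1, 0, 0, 1, 0, 0, 1, 0] * 4    # periodic AND unevenly spaced
```

Both lines are perfectly bar-periodic, but the uneven one scores higher because its inter-onset gaps vary, matching the text's preference for non-uniform distribution within a bar.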
Fig. 3 illustrates the use of a similarity matrix for extracting similarity between bars of a sound track, which may then be associated with particular instruments. The similarity matrix is based on MFCCs of the processed sound. In this example, the sound track is the song "Eye of the Tiger" (by the rock band Survivor) and the frame length is 30 ms (shorter than a note). Here, the first square 30a on the main diagonal that is greater than one pixel (3x3 pixels) indicates that at this point there is similarity, which in this case is a chord of the first leading guitar strum. Square 30a represents a time period of about 0.1 s of this chord. Rectangle 31 in the similarity matrix includes a horizontal line with several similar 3x3-pixel squares 30b-30d, which indicate the periodicity of this 0.1 s segment along the x axis. Rectangle 32 includes three more similar lines 32a-32c with 3x3-pixel squares, which indicate that there is periodicity of more chords of the same electric guitar. Any of these other three lines could be chosen as well, as they represent the same timbre. On the other hand, rectangle 33 in the similarity matrix includes a horizontal line with several similar 11x11-pixel squares 30e-30g, which indicate another periodicity. However, these dark squares represent chords of a different rhythm guitar with a more even and uniform distribution within the musical bar, and will therefore get a lower score.
Line 35 represents the output obtained from the similarity matrix, in the form of the projection of the dark areas in the chosen line 34 confined within rectangle 31. The similar timbre extracted from the line is projected after thresholding the pixel values, so as to get a logic representation (a binary vector) of the appearance of the first lead guitar (logic "1" indicates the presence of the first leading guitar in the corresponding segment, while logic "0" indicates its absence).
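The projection step just described reduces to thresholding one chosen similarity-matrix line into a binary presence vector. The similarity values and the 0.5 threshold below are hypothetical, chosen only to make the idea concrete.

```python
def binarize(row, threshold=0.5):
    """Project one similarity-matrix line into a binary vector:
    1 = the reference timbre reappears in this frame, 0 = it does not."""
    return [1 if v >= threshold else 0 for v in row]

# Hypothetical similarity values along the chosen line (high = similar timbre):
row = [0.92, 0.10, 0.05, 0.88, 0.12, 0.07, 0.90, 0.15]
lead_guitar_presence = binarize(row)
```

The resulting vector plays the role of line 35: a logic trace of where the first lead guitar is heard.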
At the next step, the exact time locations of the vibrations are adjusted, so as to locate local maximum points in note-onset events.
The metrics used to construct the similarity matrices are various timbre measures taken over short time frames, obtained by collecting features from the autocorrelation function, such as the ratio between the auto-correlation central peak and the following peak, or the linear fitting curve angle, together with features from its short-time spectral image, such as the spectral centroid and spread.
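One of the named features, the ratio between the autocorrelation central peak and the following peak, can be sketched as follows. The peak-search details (first local maximum after lag 0) are an implementation assumption, not taken from the patent.

```python
import math

def autocorr(x, max_lag):
    """Unnormalized autocorrelation for lags 0..max_lag-1."""
    n = len(x)
    return [sum(x[i] * x[i + lag] for i in range(n - lag))
            for lag in range(max_lag)]

def central_peak_ratio(x, max_lag):
    """Ratio of the autocorrelation central peak (lag 0) to the first
    local peak after it; close to 1.0 for strongly periodic frames."""
    ac = autocorr(x, max_lag)
    for lag in range(1, max_lag - 1):
        if ac[lag] >= ac[lag - 1] and ac[lag] >= ac[lag + 1]:
            return ac[0] / ac[lag]
    return float("inf")  # no secondary peak found: aperiodic frame

# A pure tone with a 20-sample period is highly periodic:
tone = [math.sin(2 * math.pi * i / 20) for i in range(200)]
ratio = central_peak_ratio(tone, 40)
```

For this tone the secondary peak sits at a lag of one period (20 samples), so the ratio stays close to 1; noisy or percussive frames would yield a much larger ratio.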
At the next step 208, system 100 is operative to output both the VP and the chosen digital sound files. The synchronous output may be generated at the mobile device 140 or, additionally or alternatively, at user terminal device 130. The control signal that corresponds to the generated VP is forwarded to an appropriate Application Programming Interface (API) of the operating system of the mobile device, which is normally an inherent feature of the operating system that allows controlling the vibration functionality of the mobile device by external means (in this case, the control signal).
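The control signal handed to such a vibration API is commonly a list of alternating off/on durations in milliseconds (this is, for example, the shape Android's `Vibrator.vibrate(long[], int)` expects). A binary per-frame vibration vector can be run-length encoded into that shape; the function and the 100 ms frame length here are illustrative assumptions.

```python
def to_vibrate_pattern(line, frame_ms):
    """Run-length encode a binary per-frame vibration vector into
    alternating [off, on, off, on, ...] millisecond durations,
    starting with an 'off' interval as mobile vibration APIs expect."""
    durations = []
    state, run = 0, 0  # patterns conventionally start in the 'off' state
    for v in line:
        if v == state:
            run += frame_ms
        else:
            durations.append(run)
            state, run = v, frame_ms
    durations.append(run)
    return durations

# Six 100 ms frames: silent, vibrate twice, silent twice, vibrate once.
pattern = to_vibrate_pattern([0, 1, 1, 0, 0, 1], 100)
```

A vector beginning with a 1 simply yields a leading 0 ms "off" interval, so the device starts vibrating immediately.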
The system may simulate the air movements naturally caused by the sound and activate the vibration element 144, so the user can feel distinctive sounds such as bass, guitar distortion, drum beats and other sounds which create dominant air movements that the human ear translates into sounds.
It should be understood that the same methodology may be applied to voice patterns and translated into a synchronous vibration pattern. The vibration pattern may be proportional to, inversely proportional to, or have some other functional relationship to the audio pattern of the music/voice sound.
At the next step 210, the user feels the VP from the vibrator element 144 and hears digital sound files from his terminal device, in synchrony.
Reference is now made to Fig. 4, which illustrates the display screen of a mobile device 140. The user can listen to an audio track while watching the audio pattern 302 on display 142 of his mobile device 140. The screen may include virtual buttons 306, 308 and a time-line 304.
While some embodiments of the invention have been described by way of illustration, it will be apparent that the invention can be carried out with many modifications, variations and adaptations, and with the use of numerous equivalents or alternative solutions that are within the reach of persons skilled in the art, without exceeding the scope of the claims.

Claims

1. A method for translation of a sound track into a vibration pattern, comprising:
a) analyzing at least one digital audio file containing said sound track according to at least one sound parameter, to form a sound analysis output;
b) generating a vibration pattern (VP) in synchrony with said at least one digital audio file; and
c) simultaneously outputting both said VP and said at least one digital audio file.
2. A method according to claim 1, wherein the digital audio file comprises at least one music file.
3. A method according to claim 1, wherein the vibration pattern is generated by converting sound of the digital audio file into said vibration pattern.
4. A method according to claim 1, wherein the VP and the digital audio file are output on a user mobile device, such that said VP is used as a control signal that is forwarded to the operating system of said mobile device, to control its vibration functionality.
5. A method according to claim 4, wherein the digital audio file is selected by a user.
6. A method according to claim 5, wherein the at least one digital audio file is at least one digital music file uploaded by the user on the user mobile device.
7. A method according to claim 1, wherein the digital audio file is analyzed according to at least one of tempo, timbre and energy.
8. A method according to claim 1, wherein the vibration pattern is proportional or inversely proportional to a sound parameter of the digital audio file.
9. A method according to claim 1, wherein the VP is output in synchrony with the digital audio file.
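By way of a non-limiting illustration of claims 3, 7 and 8 (and not the claimed implementation itself), per-frame energy analysis and a proportional energy-to-vibration mapping might be sketched as follows; the frame length and the 0-255 intensity scale are arbitrary assumptions for the example:

```python
import math

def frame_energies(samples, frame_len):
    """Per-frame RMS energy of a PCM sample sequence (one sound parameter of claim 7)."""
    frames = [samples[i:i + frame_len] for i in range(0, len(samples), frame_len)]
    return [math.sqrt(sum(s * s for s in f) / len(f)) for f in frames if f]

def energies_to_vp(energies, max_intensity=255):
    """Vibration pattern proportional to the energy parameter (claim 8):
    one vibration intensity value per audio frame."""
    peak = max(energies) or 1.0
    return [round(max_intensity * e / peak) for e in energies]

# Toy signal: a quiet passage followed by a loud one.
samples = [0.05] * 1000 + [0.8] * 1000
vp = energies_to_vp(frame_energies(samples, 500))
print(vp)  # [16, 16, 255, 255]
```

A device-side player could then feed each intensity value to the operating system's vibration functionality for the duration of one frame, in the manner of claim 4.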
10. Computer software for translation of a sound track into a vibration pattern, comprising a computer-readable medium adapted for:
a) analyzing at least one digital audio file according to at least one sound parameter, to form a sound analysis output;
b) generating a vibration pattern (VP) in synchrony with said at least one digital audio file; and
c) simultaneously outputting both said VP and said at least one digital audio file.
11. Computer software according to claim 10, wherein outputting the VP and the digital audio file is implemented in a user mobile device.
12. A system for translation of a sound track into a vibration pattern, comprising:
a) a processor;
b) a plurality of user devices;
c) a data network connecting between the processor and the plurality of user devices,
wherein the processor is configured to upload at least one digital audio file to a webpage visible on at least one of said plurality of user devices, and wherein the processor is adapted to analyze the at least one digital audio file according to at least one sound parameter to form a sound analysis output, to generate a vibration pattern (VP) in synchrony with said at least one digital audio file, and to simultaneously output both said VP and said at least one digital audio file to at least one user device.
13. A method according to claim 1, wherein the vibration pattern is generated by:
a) receiving a segment of the sound track;
b) applying a set of preprocessors that run in parallel to the audio file;
c) scoring the preprocessing results according to both the periodicity and the non-uniform distribution inside a musical bar;
d) constructing note-scale similarity matrices, based on metrics that represent local musical timbre;
e) applying thresholding such that each line in a similarity matrix represents a binary vector that corresponds to a certain underlying musical element in the sound track;
f) selecting from the similarity matrix a repetitive line having un-even or non-uniform distribution within a musical-bar;
g) applying a downbeat location process and scoring the results by preferring lines with higher syncopation; and
h) projecting clusters of several pixels after thresholding the pixels values having the highest score.
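Steps d) and e) of claim 13 might be sketched, purely illustratively, with a Euclidean distance standing in for the local-timbre metric and a fixed threshold; the feature vectors and the eps value below are invented for the example and are not the claimed metric:

```python
import math

def timbre_distance(a, b):
    # Euclidean distance between two feature vectors
    # (a stand-in for a metric representing local musical timbre).
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def similarity_matrix(features):
    # Note-scale similarity matrix: pairwise distances between note events.
    n = len(features)
    return [[timbre_distance(features[i], features[j]) for j in range(n)] for i in range(n)]

def threshold_matrix(mat, eps):
    # Thresholding: each line becomes a binary vector marking which note
    # events are "similar", i.e. one underlying musical element (step e).
    return [[1 if d < eps else 0 for d in row] for row in mat]

# Four note events; events 0 and 2 share a timbre, as do events 1 and 3.
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.1], [0.1, 1.0]]
binary = threshold_matrix(similarity_matrix(feats), eps=0.5)
print(binary[0])  # [1, 0, 1, 0]
```

In the claimed method, a repetitive line of such a binary matrix with non-uniform in-bar distribution would then be selected as in step f).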
14. A method according to claim 13, wherein the metrics used to construct the similarity matrices are measures on a note-event's auto-correlation function.
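The auto-correlation function of claim 14, on which such metrics are measured, can be shown in a minimal sketch; the periodic toy event sequence below is an assumption chosen so that the periodicity shows up as a peak at the period lag:

```python
def autocorrelation(x):
    """Normalized auto-correlation of a note-event activation sequence."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    return [sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag)) / var
            for lag in range(n)]

# A strictly periodic event sequence (period 4) correlates with itself
# again at lag 4, so ac[4] stands out against neighboring lags.
events = [1, 0, 0, 0] * 4
ac = autocorrelation(events)
print(round(ac[0], 2), round(ac[4], 2))  # 1.0 0.75
```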
15. A method according to claim 13, wherein the preprocessors are selected from the group consisting of:
a) a filter-bank of low-pass, midrange and high-pass filters that divides the signal into relevant frequency ranges, each of which includes one or more instruments playing concurrently;
b) a full range filter for detecting a repetitive diverse segment; and
c) a detector of stationary high-spectral-spread events, being non-percussive events with stable frequency components that spread across a wide frequency range.
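The filter-bank branch of item a) might be illustrated with a deliberately naive DFT band-energy measure (not an efficient or claimed filter design; the sample rate and band edges are arbitrary assumptions):

```python
import math

def band_energy(samples, rate, lo, hi):
    """Sum of naive DFT magnitudes over [lo, hi) Hz -- a stand-in for one
    band-pass branch of a filter-bank, usable only on short toy signals."""
    n = len(samples)
    total = 0.0
    for k in range(n // 2):
        if lo <= k * rate / n < hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            total += math.hypot(re, im)
    return total

# A 440 Hz sine lands almost entirely in the low/mid band.
rate, n = 8000, 400
tone = [math.sin(2 * math.pi * 440 * i / rate) for i in range(n)]
low_mid = band_energy(tone, rate, 300, 600)
high = band_energy(tone, rate, 1000, 2000)
print(low_mid > high)  # True
```

Splitting the signal this way lets each frequency range, typically carrying different instruments, be scored separately before the downbeat and syncopation steps of claim 13.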
16. A method according to claim 13, wherein the similarity matrix is based on MFCCs of the processed sound.
PCT/IL2012/050518 2011-12-12 2012-12-11 System and method of translating digital sounds into mobile device vibrations WO2013088437A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201161569394P 2011-12-12 2011-12-12
US61/569,394 2011-12-12

Publications (1)

Publication Number Publication Date
WO2013088437A1 true WO2013088437A1 (en) 2013-06-20

Family

ID=48611946

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2012/050518 WO2013088437A1 (en) 2011-12-12 2012-12-11 System and method of translating digital sounds into mobile device vibrations

Country Status (1)

Country Link
WO (1) WO2013088437A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2380908A (en) * 2000-11-21 2003-04-16 Nec Corp Sounding music accompanied by vibration, for eg a phone terminal
EP1919158A1 (en) * 2006-11-03 2008-05-07 LG Electronics Inc. Broadcasting terminal and method of controlling vibration of a mobile terminal
US20100148942A1 (en) * 2008-12-17 2010-06-17 Samsung Electronics Co., Ltd. Apparatus and method of reproducing content in mobile terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JAKUB GLACZYNSKI ET AL.: "Automatic Music Summarization. A 'Thumbnail' Approach", ARCHIVES OF ACOUSTICS, Poznan University of Technology, 27 November 2009 (2009-11-27), pages 297-309, XP003031238 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018155926A1 (en) * 2017-02-23 2018-08-30 삼성전자주식회사 Method and apparatus for providing vibration in electronic device
US11198154B2 (en) 2017-02-23 2021-12-14 Samsung Electronics Co., Ltd. Method and apparatus for providing vibration in electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 12857837
Country of ref document: EP
Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01/10/2014)
122 Ep: pct application non-entry in european phase
Ref document number: 12857837
Country of ref document: EP
Kind code of ref document: A1