USRE47948E1 - Content playback method and content playback apparatus - Google Patents

Content playback method and content playback apparatus

Info

Publication number
USRE47948E1
Authority
US
United States
Prior art keywords
content
output
timing
user
movement
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
US14/601,569
Inventor
Motoyuki Takai
Toshiro Terauchi
Yoichiro Sako
Yasushi Miyajima
Kosei Yamashita
Makoto Inoue
Masamichi Asukai
Katsuya Shirai
Kenichi Makino
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Application filed by Sony Corp
Priority to US14/601,569
Application granted
Publication of USRE47948E1
Expired - Fee Related

Classifications

    • G PHYSICS
        • G10 MUSICAL INSTRUMENTS; ACOUSTICS
            • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
                • G10H 1/00 Details of electrophonic musical instruments
                    • G10H 1/36 Accompaniment arrangements
                        • G10H 1/40 Rhythm
                • G10H 2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
                    • G10H 2210/375 Tempo or beat alterations; Music timing control
                        • G10H 2210/391 Automatic tempo adjustment, correction or control
        • G11 INFORMATION STORAGE
            • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
                • G11B 20/00 Signal processing not specific to the method of recording or reproducing; Circuits therefor
                    • G11B 20/10 Digital recording or reproducing
                • G11B 27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
                    • G11B 27/005 Reproducing at a different information rate from the information rate of recording
                    • G11B 27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
                        • G11B 27/102 Programmed access in sequence to addressed parts of tracks of operating record carriers
                            • G11B 27/105 Programmed access in sequence to addressed parts of tracks of operating discs
                • G11B 31/00 Arrangements for the associated working of recording or reproducing apparatus with related apparatus
                    • G11B 31/02 Arrangements for the associated working with automatic musical instruments
    • A HUMAN NECESSITIES
        • A63 SPORTS; GAMES; AMUSEMENTS
            • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
                • A63B 71/00 Games or sports accessories not covered in groups A63B1/00 - A63B69/00
                    • A63B 71/06 Indicating or scoring devices for games or players, or for other sports activities
                        • A63B 71/0686 Timers, rhythm indicators or pacing apparatus using electric or electronic means


Abstract

By allowing the movement of a user to affect the playback of content, the entertainment characteristic or uplifting spirits derived from content playback itself can be enhanced. The timing at which the user places his/her foot on the ground while walking, or the timing at which the user shakes or changes from swinging in one direction to swinging in another direction, is detected. Then, content, such as a music piece or moving pictures, is played back by allowing the timing of beats or bars of the music piece or the timing of scene changes or cut changes of the moving pictures to be synchronized with the timing of the movement of the user.

Description

The present application is a reissue application of U.S. application Ser. No. 11/665,578, now U.S. Pat. No. 8,358,906, issued Jan. 22, 2013, which is based on a National Stage Application of PCT/JP2005/019098 filed on Oct. 18, 2005, which in turn claims priority from Japanese Patent Application JP 2004-302686 filed on Oct. 18, 2004, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present invention relates to a method and an apparatus for playing back content, such as music or video.
BACKGROUND ART
When playing back music with DVD (Digital Versatile Disc) players, CD (Compact Disc) players, or MD (Mini Disc: registered trademark) players, it is very important to play back music with high quality. The same applies to playing back music compressed using ATRAC (Adaptive Transform Acoustic Coding: registered trademark) or MP3 (MPEG-1 Audio Layer-3), since users ultimately listen to the time-series signal waveforms decoded from the compressed data.
In contrast, a device that can change content, such as music, in real time in accordance with a current situation of a user is now being considered. For example, the tempo or pitch of music is changed in accordance with a current situation of a user and short sections of music materials are spliced to create one piece of continuous music. From this point of view, the device plays the role of improvising a new piece of music by combining existing pieces of music, which is conventionally conducted by a DJ (Disk Jockey) or VJ (Video Jockey).
When people enjoy music in a concert hall or a discotheque, they stamp, jump, or move their heads back and forth. Swinging in tune with music is not uncommon, and just walking in tune with marching music is pleasant and uplifting. This shows that synchronizing the rhythm of one's physical movements, such as stepping or head movement, with the rhythm of content can enhance one's spirits or mood.
According to the invention disclosed in Japanese Unexamined Patent Application Publication No. 2000-300838, in a step game (music direction game), such as “Dance Dance Revolution” (trade name), in which players compete to get a score by stepping in tune with music, play environments that match players' tastes are provided.
According to the invention disclosed in Japanese Unexamined Patent Application Publication No. 2002-65891, the above-described step game is applied to rehabilitation of patients. In this game, the heart rate of a user (patient) is detected and, if the detected heart rate exceeds a reference heart rate, an alarm is issued.
DISCLOSURE OF INVENTION
According to the above-described known content playback method, however, content playback does not react to the movement of a user, but the user acts in accordance with content to be played back, such as stepping in tune with music to be played back. Thus, the entertainment characteristic or uplifting spirits derived from content playback depends on the content itself, and is not sufficient.
This invention has been made to enable the elevation of the entertainment characteristic or uplifting spirits derived from content playback by allowing users' movements to affect the content playback.
According to a content playback method of this invention, a movement of a user is detected, and content is played back by allowing the timing of the content to be synchronized with a feature, a cycle, or a rhythm of the movement.
According to the above-described content playback method, the timing of content to be played back, such as the timing of beats or bars of a music piece to be played back, or the timing of scene changes or cut changes of moving pictures to be played back, is synchronized with the timing at which the user places his/her foot on the ground during walking, or the timing at which the user shakes or changes from swinging in one direction to swinging in another direction. This makes the user, who is moving while appreciating or sensing the content, feel good, so that the entertainment characteristic and uplifting spirits derived from the content playback itself can be enhanced.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates an example of a content playback apparatus of this invention.
FIG. 2 illustrates an example of an operation performed by the content playback apparatus shown in FIG. 1.
FIG. 3 illustrates the functional configuration of the content playback apparatus shown in FIG. 1.
FIG. 4 illustrates a case where short music pieces in units of bars are used as content materials.
FIG. 5 illustrates a first example of a music content playback method.
FIG. 6 illustrates an example of processing when the method shown in FIG. 5 is performed.
FIG. 7 illustrates a second example of a music content playback method.
FIG. 8 illustrates an example of processing when the method shown in FIG. 7 is performed.
FIG. 9 illustrates an example of a body movement detector.
FIG. 10 illustrates an example of a walking sensor.
FIG. 11 illustrates an example of a strain gauge.
FIG. 12 illustrates an example of an output signal from a walking sensor.
FIG. 13 illustrates another example in which a walking sensor is disposed.
FIG. 14A illustrates an example when the movement of a head is detected.
FIG. 14B illustrates an example when the movement of a head is detected.
FIG. 14C illustrates an example when the movement of a head is detected.
FIG. 15 illustrates an example when the movement of a hand is detected.
BEST MODE FOR CARRYING OUT THE INVENTION
[1. Embodiment of Content Playback Apparatus: FIGS. 1 through 3]
FIG. 1 illustrates an example of a content playback apparatus of this invention, which is configured as a portable music/video playback device or a cellular telephone terminal.
A content playback apparatus 10 of this example includes a CPU 11. A ROM 13 into which various programs and data are written and a RAM 14 into which programs and data are expanded are connected to a bus 12.
A built-in recording device 15, such as a hard disk, is connected to the bus 12 with an interface 16 therebetween, and a removable recording medium 17, such as a CD or a DVD, is connected to the bus 12 with an interface 18 therebetween. A duplexer antenna 22 is connected to the bus 12 with an RF interface 21, such as an RF duplexer circuit, and an external interface 23 for connecting the content playback apparatus 10 to, for example, the Internet 24, is connected to the bus 12.
The recording device 15 and the recording medium 17 record or play back content, such as music or video, or content materials discussed below. The RF interface 21 and the duplexer antenna 22 wirelessly receive or send content or content materials from or to an external source. The external interface 23 receives or sends content or content materials from or to an external source via the Internet 24.
A video display device 32 including, for example, a liquid crystal device, is connected to the bus 12 with a video processor 31 therebetween, and a sound output device 34 including, for example, a speaker and a headphone, is connected to the bus 12 with a sound processor 33 therebetween. On the display screen of the video display device 32, video, which serves as content, or a setting or operating screen, is displayed. Music, which serves as content, or sound, such as voice announcement, is output from the sound output device 34.
An operation unit 41 including various keys is connected to the bus 12 with an interface 42 therebetween, and a sound input microphone 43 is connected to the bus 12 with a sound processor 44 therebetween.
A body movement sensor 51 is connected to the bus 12 with an A/D converter 52 therebetween. A user's movement, which is discussed below, is detected by the body movement sensor 51 and a corresponding movement detection signal is converted from an analog signal into digital data by the A/D converter 52 and is sent to the bus 12.
The content in this invention includes all subjects that can be appreciated or sensed by users, such as music, sound other than music, moving pictures, still images (including graphics, drawings, and characters), vibrations, and light, such as illumination light. In the example shown in FIG. 1, however, the content is music or video.
According to the content playback method used in the content playback apparatus 10 in the example shown in FIG. 1, in a case where, for example, a user 1 listens to music while walking, the timing at which the user 1 places his/her foot on the ground while walking is detected, as shown in FIG. 2, by a walking sensor, which serves as the body movement sensor 51, fixed at a shoe worn by the user 1 or a foot of the user 1. Then, music is played back by allowing the timing of beats or bars (measures) of a piece of music to be synchronized with the timing at which the user 1 places his/her foot on the ground, which is discussed below.
The functional configuration of the content playback apparatus 10 shown in FIG. 1 is shown in FIG. 3. A body movement detector 61, which is formed of the body movement sensor 51 and the A/D converter 52, detects user's movements, for example, the movements of the feet while the user is running or the movements of the head while the user is listening to music.
A body movement analyzer 62, which is formed of the CPU 11, the ROM 13, and the RAM 14, analyzes movement detection data that is sent to the bus 12 from the body movement detector 61 to detect a feature, a cycle, or a rhythm (tempo) of the body movement of the user.
The feature of the body movement includes the timing of a start point (point at which the user starts to move the body from the state in which the user is still), an end point (point at which the user starts to bring the body into a standstill from the state in which the user is moving), a maximal point, a minimal point, a maximum peak point, and a minimum peak point of the body movement, and more specifically, the timing at which the user places his/her foot on the ground during walking, or the timing at which the user shakes or changes from swinging in one direction to swinging in another direction during walking.
In this case, detecting the body movement includes predicting the next movement from the previous movement detection result, such as predicting the next timing at which the user places his/her foot on the ground or changes from swinging in one direction to swinging in another direction.
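As a rough illustration of such prediction (not part of the patent text), the following sketch extrapolates the next foot-strike time from recent inter-step intervals; the class and its parameters are hypothetical.

```python
import statistics

class StepPredictor:
    """Toy sketch of the analyzer's prediction step: estimate the next
    foot-strike time from recent inter-step intervals. The class and its
    parameters are illustrative, not taken from the patent."""

    def __init__(self, history=8):
        self.history = history      # number of recent foot strikes to keep
        self.strike_times = []

    def on_step(self, t):
        """Record a detected foot strike at time t (seconds)."""
        self.strike_times.append(t)
        del self.strike_times[:-self.history]

    def predict_next(self):
        """Predicted time of the next foot strike, or None if too few samples."""
        if len(self.strike_times) < 3:
            return None
        intervals = [b - a for a, b in
                     zip(self.strike_times, self.strike_times[1:])]
        # The median is robust to an occasional missed or double-counted step.
        return self.strike_times[-1] + statistics.median(intervals)
```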
A content generator 63, which is formed of the CPU 11, the ROM 13, the RAM 14, and the video processor 31 or the sound processor 33, drives a content material reader 64 to read out content materials and accompanying timing information from a content material database 65 on the basis of the above-described body movement detection result to generate content in real time.
The content material reader 64 is formed of the CPU 11, the ROM 13, the RAM 14, the interfaces 16 and 18, the RF interface 21, and the external interface 23. The content material database 65 is a database provided in the recording device 15 or the recording medium 17 on which content material data and timing information are recorded, or a server that sends content material data and timing information.
A content output unit 66 outputs content generated by the content generator 63, and is formed of the video display device 32 or the sound output device 34.
The content materials are materials for generating final content, and may be normal content (music data or video data recorded on CDs or DVDs, music data compressed by MP3, etc.). They are, however, preferably short music pieces in the units of bars (several bars), or short video in units of scenes or cuts.
Timing information indicating timing, such as beat timing or scene change timing, is added as meta information to each piece of content material data, and then, the content material data with the timing information is recorded or sent. Alternatively, timing information may be generated simultaneously with the reading of content materials.
If content materials are short music pieces in the units of bars or short video in the units of scenes or cuts, such content materials can be spliced after deciding the temporal order of the content materials, and if necessary, the playback time duration of the content materials is expanded or shortened to generate one piece of content in real time.
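A minimal sketch of how a content material and its accompanying timing information might be represented follows; the field names and the 120-beats-per-minute example are illustrative assumptions, not the patent's data format.

```python
from dataclasses import dataclass, field

@dataclass
class ContentMaterial:
    """Hypothetical record pairing one content material (e.g. one bar of
    audio, or one scene or cut of video) with its timing meta information.
    Field names are illustrative, not the patent's data format."""
    material_id: int
    payload: bytes                 # PCM audio, MIDI, or encoded video data
    duration_s: float              # nominal playback duration Tm of the section
    timing_s: list = field(default_factory=list)  # beats, scene changes, ...

# Four one-bar materials at 120 BPM: each bar lasts 2 s, with beats every 0.5 s.
bars = [ContentMaterial(i, b"", 2.0, [0.0, 0.5, 1.0, 1.5]) for i in range(4)]
```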
[2. Embodiment of Content Playback Method: FIGS. 4 through 8]
In the content playback method of this invention, content is played back by allowing the playback timing of content to be synchronized with a feature, a cycle, or a rhythm of user's movements on the basis of the above-described timing information.
As discussed above, the feature of the movement includes a start point, an end point, a maximal point, a minimal point, a maximum peak point, or a minimum peak point of the body movement, and more specifically, the timing at which the user places his/her foot on the ground during running or the timing at which the user changes from swinging in one direction to swinging in another direction during running.
(2-1. In the Case of Music Content: FIGS. 4 through 8)
In the case of music content, to play back content, as shown in FIG. 4, one bar of a music piece is used as each of four content materials A, B, C, and D, and the content materials A, B, C, and D are spliced to create one piece of music content.
The content materials A through D may be MIDI or PCM (pulse code modulation) data. If they are MIDI data, beat timing information can be directly obtained, and also, the amount of computation required for expanding or shortening the content materials for generating content, which is discussed below, is small. If they are PCM data, the content materials A through D are pure sound waveforms, and thus, beat timing information is generated beforehand separately from the content materials A through D and is recorded or sent together with the content materials A through D.
Beat timing information may be calculated from the waveforms of content materials immediately before playing back the content materials. As the calculation method, the method disclosed in reference document 1 (as a PDF file, Masataka Goto: An Audio-based Real-time Beat Tracking System for Music With or Without Drum-sounds, Journal of New Music Research, Vol. 30, No. 2, pp. 159-171, June 2001) or reference document 2 (as a book, Masataka Goto: Haku Setsu Ninshiki (Beat/Bar Recognition (Beat Tracking)), Bit Special Number, Konputer to Ongaku no Sekai (World of Computer and Music) . . . Kiso kara Furontia made (From Base to Frontier), pp. 100-116, Kyoritsu Shuppan Co., Ltd. August 1998) can be used.
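The cited beat-tracking methods are considerably more sophisticated; purely as a toy illustration of deriving beat timing from a PCM waveform immediately before playback, one might autocorrelate an onset envelope and pick the strongest lag in a plausible tempo range. In the sketch below (Python with NumPy), the frame size, tempo range, and the assumption of at least a few seconds of audio are all arbitrary choices, not taken from the references.

```python
import numpy as np

def estimate_beat_period(pcm, sr, min_bpm=60.0, max_bpm=180.0):
    """Toy tempo estimate for a PCM content material: autocorrelate a
    half-wave-rectified energy-difference onset envelope and pick the
    strongest lag in a plausible tempo range. A crude placeholder for the
    beat-tracking methods cited above, not an implementation of them."""
    hop = sr // 100                              # ~10 ms analysis frames
    frames = pcm[: len(pcm) // hop * hop].reshape(-1, hop)
    energy = (frames.astype(np.float64) ** 2).sum(axis=1)
    onset = np.maximum(np.diff(energy), 0.0)     # keep rises in energy only
    onset -= onset.mean()
    # Autocorrelation; index i is the correlation at a lag of i frames.
    ac = np.correlate(onset, onset, mode="full")[len(onset) - 1:]
    lo = int(60.0 / max_bpm * 100)               # shortest plausible beat lag
    hi = int(60.0 / min_bpm * 100)               # longest plausible beat lag
    best_lag = lo + int(np.argmax(ac[lo:hi]))
    return best_lag / 100.0                      # beat period in seconds
```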
An example of generating one piece of music content by splicing content materials A, B, C, and D, such as those shown in FIG. 4, is described below. It is now assumed that the content materials A, B, C, and D all consist of one bar and have the same duration Tm.
First Example: FIGS. 5 and 6
FIG. 5 illustrates a first example of a method for synchronizing the timing of content with the walking tempo. The method in this example is suitable when the walking tempo is constant, and the start timing of each content material is synchronized with the timing at which the user places his/her foot on the ground by expanding or shortening the content materials.
More specifically, in the example shown in FIG. 4, since there is a beat at the head of each of the bars A through D, the head of each of the bars A through D is synchronized, every four steps, with the timing at which the user places his/her foot on the ground. Although the number of steps corresponding to each bar does not have to be four, the tempos of many marching music pieces correspond to 120 beats per minute, i.e., one beat = one step = 0.5 seconds, and four steps (four beats) form one bar. Thus, if the number of steps to be synchronized with a bar is four, the head of each of the bars A through D can be synchronized with the timing at which the user places his/her foot on the ground without the need to considerably expand or shorten the content materials.
In this example, in response to an instruction to start playing back content from the user at time t0, the first bar A is played back as it is, i.e., for time Tm.
Then, assuming that the step which is first detected after the content playback start time t0 is the first step, when the fifth step is detected, time T1 for four steps from the first step to the fifth step is calculated.
Then, the time D1 between a playback end time ta of the first bar A and the time at which the fifth step is detected is calculated to determine the difference (T1−D1) between the time T1 and the time D1. The bar B is then expanded or shortened by a factor of (T1−D1)/Tm and is played back so that the difference (T1−D1) becomes equal to the playback duration of the second bar B.
The example shown in FIG. 5 is a case where the playback of the first bar A is finished after the time at which the fifth step is detected, which means D1>0, and the difference (T1−D1) is shorter than the time T1 or the time Tm, which means (T1−D1)/Tm<1, and the bar B is thus shortened and played back.
Conversely, if the playback of the first bar A is finished before the fifth step is detected, the playback of the bar A is restarted immediately, and when the fifth step is detected, the playback of the bar A is discontinued. In this case, D1<0, and the difference (T1−D1) is longer than the time T1 and may be longer than Tm. In this manner, when the difference (T1−D1) is longer than the time Tm, which means (T1−D1)/Tm>1, the bar B is expanded and played back.
Thereafter, similarly, the subsequent bars are sequentially played back so that the difference (Tn−Dn)[n=1, 2, 3, 4 . . . ] becomes equal to the playback duration of the next (n+1)th bar. When the playback of the fourth bar D is finished, the first bar A is resumed and the playback is repeated.
When the walking tempo is not changed, as in the example in FIG. 5, T1=T2=T3=T4 . . . , and Dn=0, except for D1, and thus, the head of each bar can be synchronized, every four steps, with the timing at which the user places his/her foot on the ground.
Additionally, in this example, since bars, which are content materials, are played back by being expanded or shortened, the continuity of the content materials as a music piece can be maintained.
FIG. 6 illustrates content playback processing executed by the CPU 11 of the content playback apparatus 10 shown in FIG. 1, i.e., content generation processing executed by the content generator 63 shown in FIG. 3, when the content playback method in the above-described example is employed.
In this example, the processing is started in response to an instruction to start playing back content from the user, as described above. In step 101, n=0 and m=0, where n indicates the number of steps (different from the n shown in FIG. 5) and m indicates the content material number. In the example shown in FIG. 5, for the bar A, m=1, 5, 9 . . . , for the bar B, m=2, 6, 10 . . . , for the bar C, m=3, 7, 11 . . . , and for the bar D, m=4, 8, 12 . . . .
Then, the process proceeds to step 102 in which walking sensing is started using the body movement sensor 51. The process then proceeds to step 103 in which m=1, and then proceeds to step 104 in which the content material having m=1 is read and the playback of the content material is started.
Then, the process proceeds to step 111 to determine whether a step has been detected. If a step has been detected, the process proceeds to step 112 in which the number n of steps is incremented by 1. The process then proceeds to step 113 to determine whether the number n of steps after being incremented is equal to {(multiple of four)+1}, i.e., whether the number n of steps is the fifth step, the ninth step, the thirteenth step, . . . .
If the number n of steps after being incremented is not equal to {(multiple of four)+1}, the process proceeds from step 113 to step 111 to determine whether another step has been detected. If the number n of steps after being incremented is equal to {(multiple of four)+1}, the process proceeds from step 113 to step 114 in which the content material number m is incremented by one.
After the content material number m is incremented by one in step 114, the process proceeds to step 115. In step 115, the content material having the content material number m after being incremented is read and is expanded or shortened by a factor of (Tn−Dn)/Tm, and then, the playback of the content material is started.
Then, the process proceeds to step 116 to determine whether the playback of the content is to be finished. If the playback of the content is to be finished in response to an instruction to finish playing back the content from the user, the content playback processing (content generation processing) is completed. If the playback of the content is continued, the process returns from step 116 to step 111 to determine whether another step is detected.
Although it is not shown in FIG. 5 or 6, if the user stops walking without giving an instruction to finish playing back the content while a content material (bar) is being played back, the subsequent content materials (bars) are sequentially played back with, for example, the playback duration (Tn−Dn) that was in effect when the user stopped walking.
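The timing arithmetic of this first example can be condensed into a small sketch: given the detected foot-strike times and the nominal bar duration Tm, it returns the duration assigned to each successive bar, i.e., Tm for the first bar and (Tn−Dn) thereafter. This is a hypothetical helper that assumes playback starts on the first detected step.

```python
def stretched_durations(step_times, tm):
    """Sketch of the FIG. 5 / FIG. 6 scheme: given detected foot-strike
    times and the nominal bar duration Tm, return the playback duration of
    each successive bar so that every bar head lands on a (4k+1)th step.
    Hypothetical helper; assumes playback starts on the first step."""
    durations = [tm]                    # the first bar A is played as-is
    bar_end = step_times[0] + tm        # scheduled end of the current bar
    for k in range(4, len(step_times), 4):       # the 5th, 9th, 13th, ... steps
        t_n = step_times[k] - step_times[k - 4]  # Tn: time of the last four steps
        d_n = bar_end - step_times[k]            # Dn > 0: bar outlives the step
        durations.append(t_n - d_n)              # next bar plays for (Tn - Dn),
        bar_end += t_n - d_n                     # i.e. a stretch factor (Tn-Dn)/Tm
    return durations

steps = [0.5 * i for i in range(13)]        # perfectly steady 120-steps/min walk
print(stretched_durations(steps, tm=2.0))   # [2.0, 2.0, 2.0, 2.0]: Dn = 0, no stretch
```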
Second Example: FIGS. 7 and 8
FIG. 7 illustrates a second example of a method for allowing the timing of content to be synchronized with the walking tempo. The method in this example is suitable when the walking tempo changes considerably, and the start timing of each content material is forced to be synchronized with the timing at which the user places his/her foot on the ground without expanding or shortening the content materials.
Specifically, since there is a beat at the head of each of the bars A through D in the example shown in FIG. 4, as in the example shown in FIG. 5, the head of each of the bars A through D is synchronized, every four steps, with the timing at which the user places his/her foot on the ground.
In this example, in response to an instruction to start playing back content from the user at time t0, at time t1 when the first step is detected immediately after the time t0, the playback of the bar A is started and the bar A is played back for time Tm.
If, as shown in FIG. 7, the playback of the bar A is finished before the fifth step is detected since the time from the first step to the fifth step is longer than the time Tm, the playback of the bar A is restarted at time t2. Then, the playback of the bar A is discontinued at time t3 when the fifth step is detected, and the playback of the bar B is immediately started.
Conversely, if the fifth step is detected before the playback of the bar A is finished since the time from the first step to the fifth step is shorter than the time Tm, the playback of the bar A is discontinued and the playback of the bar B is immediately started.
The bar B is also played back until the ninth step is detected without being expanded or shortened. More specifically, if, as shown in FIG. 7, the ninth step is detected before the playback of the bar B is finished since the time from the fifth step to the ninth step is shorter than the time Tm, the playback of the bar B is discontinued, and the playback of the bar C is immediately started. Conversely, if the playback of the bar B is finished before the ninth step is detected since the time from the fifth step to the ninth step is longer than the time Tm, the playback of the bar B is restarted. Then, at the time when the ninth step is detected, the playback of the bar B is discontinued, and the playback of the bar C is immediately started.
Thereafter, similarly, the subsequent bars are sequentially played back without being expanded or shortened. When the playback of the fourth bar D is finished, the first bar A is resumed and the playback is repeated.
FIG. 7 shows an example in which the walking tempo is progressively increased and the second and the subsequent bars are not played back until the end.
In this example, the continuity of the content materials as a music piece is lost at the spliced portions of the bars. However, the awkward feeling can be reduced by fading out or fading in the bars before and after the spliced portions.
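A minimal sketch of such a fade at a spliced portion is shown below, assuming float PCM arrays and an arbitrary 50 ms fade length (neither is specified in the patent).

```python
import numpy as np

def splice_with_fade(tail, head, sr, fade_s=0.05):
    """Minimal sketch of softening a cut at a spliced portion: fade out the
    end of the discontinued bar and fade in the start of the next one.
    Assumes float PCM arrays longer than the fade; the 50 ms fade length
    is an arbitrary choice, not from the patent."""
    n = int(sr * fade_s)
    ramp = np.linspace(1.0, 0.0, n)
    out = np.concatenate([tail, head])
    out[len(tail) - n:len(tail)] *= ramp        # fade out before the splice
    out[len(tail):len(tail) + n] *= ramp[::-1]  # fade in after the splice
    return out
```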
FIG. 8 illustrates content playback processing executed by the CPU 11 of the content playback apparatus 10 shown in FIG. 1, i.e., content generation processing executed by the content generator 63 shown in FIG. 3, when the content playback method in the above-described example is employed.
In this example, the processing is started in response to an instruction to start playing back content from the user, as described above. In step 121, n=0 and m=0 where n indicates the number of steps and m indicates the content material number. In the example shown in FIG. 7, for the bar A, m=1, 5, 9 . . . , for the bar B, m=2, 6, 10 . . . , for the bar C, m=3, 7, 11 . . . , and for the bar D, m=4, 8, 12 . . . .
Then, the process proceeds to step 122 in which walking sensing is started using the body movement sensor 51. The process then proceeds to step 123 to determine whether the first step has been detected. If the first step has been detected, the process proceeds to step 124 in which n=1 and m=1, and then proceeds to step 125 in which the content material having m=1 is read and the playback of the content material is started.
Then, the process proceeds to step 131 to determine whether a step has been detected. If a step has been detected, the process proceeds to step 132 in which the number n of steps is incremented by 1. The process then proceeds to step 133 to determine whether the number n of steps after being incremented is equal to {(multiple of four)+1}, i.e., whether the number n of steps is the fifth step, the ninth step, the thirteenth step, . . . .
If the number n of steps after being incremented is not equal to {(multiple of four)+1}, the process proceeds from step 133 to step 131 to determine whether another step has been detected. If the number n of steps after being incremented is equal to {(multiple of four)+1}, the process proceeds from step 133 to step 134 in which the content material number m is incremented by one.
After the content material number m is incremented by one in step 134, the process proceeds to step 135. In step 135, the content material having the content material number m after being incremented is read and the playback of the content material is started.
Then, the process proceeds to step 136 to determine whether the playback of the content is to be finished. If the playback of the content is to be finished in response to an instruction to finish playing back the content from the user, the content playback processing (content generation processing) is completed. If the playback of the content is continued, the process returns from step 136 to step 131 to determine whether another step has been detected.
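The flow of steps 121 through 136 can be summarized in code. The following is a minimal sketch, not the apparatus's actual implementation: the step source and the playback routine are hypothetical stand-ins (a simulated step generator and a print statement), and only the counting logic of FIG. 8 is taken from the description above.

import random
import time

BARS = ["A", "B", "C", "D"]  # the four one-bar content materials of FIG. 7

def material_for(m):
    # Content material number m maps to bar A, B, C, D cyclically
    # (m = 1, 5, 9, ... -> A; m = 2, 6, 10, ... -> B; and so on).
    return BARS[(m - 1) % len(BARS)]

def simulated_steps(count=20):
    # Hypothetical stand-in for the body movement sensor 51:
    # yields once per detected step, at a walking tempo near 2 Hz.
    for _ in range(count):
        time.sleep(random.uniform(0.4, 0.6))
        yield None

def playback_loop():
    steps = simulated_steps()
    n = 0  # number of steps (initialized in step 121)
    m = 0  # content material number (initialized in step 121)
    for _ in steps:              # steps 122-123: wait for the first step
        n, m = 1, 1              # step 124
        print("start bar", material_for(m))  # step 125: start playback
        break
    for _ in steps:              # step 131: another step detected
        n += 1                   # step 132
        if n % 4 == 1:           # step 133: n equals (multiple of four) + 1
            m += 1               # step 134
            # step 135: discontinue the current bar (or its repeat) and
            # immediately start the next material on this step.
            print("start bar", material_for(m))
    # step 136: the loop ends when playback is instructed to finish.

playback_loop()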
Although it is not shown in FIG. 7 or 8, if the user stops walking while a content material (bar) is being played back, without giving an instruction to finish playing back the content, the subsequent content materials (bars) are played back sequentially with their original duration Tm.
(2-2. In the Case of Other Content)
The above-described example has been described in the context of music (music piece) content. For moving picture content or still image content, it is likewise preferable to play back the moving pictures or still images with the content playback timing synchronized with the user's movements.
More specifically, in the case of moving pictures, the timing of scene changes or cut changes of moving pictures is synchronized with the timing at which the user shakes or changes from swinging in one direction to swinging in another direction. In this case, as content materials, moving pictures in units of scenes or cuts are used.
In the case of still images, for example, when a plurality of still images are sequentially played back, as in slideshow display, the timing of switching from one still image to another still image is synchronized with the timing at which the user shakes or changes from swinging in one direction to swinging in another direction. In this case, as content materials, still images in units of files are used.
The content playback method of this invention can also be applied to vibration or light content.
In the case of vibration content, the frequency (cycle) or the strength (amplitude) of vibrations is changed in accordance with the timing at which the user shakes or changes from swinging in one direction to swinging in another direction. In this case, as content materials, data for generating vibrations having a certain vibration pattern is used.
In the case of light content, the color (wavelength) or brightness (illuminance) of the light is changed in accordance with the timing at which the user shakes or changes from swinging in one direction to swinging in another direction. In this case, as content materials, data for generating light having a certain color or brightness is used.
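As one concrete illustration of this kind of adaptation, the sketch below derives a vibration cycle from detected swing timings; the function name, the fallback cycle, and the use of a simple mean are assumptions made for illustration, not details taken from the description.

def vibration_cycle_from_swings(swing_times_s):
    # Derive a vibration cycle (in seconds) from the timestamps at which
    # the user changes from swinging in one direction to the other, so
    # that the vibration frequency tracks the movement.
    if len(swing_times_s) < 2:
        return 0.5  # assumed fallback before enough swings are observed
    intervals = [b - a for a, b in zip(swing_times_s, swing_times_s[1:])]
    return sum(intervals) / len(intervals)  # mean swing interval

# Swings detected at roughly 0.8 s intervals give a cycle of about 0.83 s.
print(vibration_cycle_from_swings([0.0, 0.8, 1.6, 2.5]))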
[3. Body Movement Detection Method: FIGS. 9 through 15]
If, as in the example described above, walking is detected as the user's movement and the timing at which the user places his/her foot on the ground is detected as the feature of the movement, the following arrangement can be used. Walking sensors 5L and 5R are fixed, for example, as shown in FIG. 9, at the heels of the left and right shoes 2L and 2R worn by the user, or at the heels of the user's left and right feet. Output signals from the walking sensors 5L and 5R are amplified by amplifier circuits 55L and 55R, respectively, and are then supplied to high-pass filters 56L and 56R, where components at or above a predetermined frequency are extracted. The extracted components are converted into digital data by A/D converters 57L and 57R and supplied to the bus 12 of the content playback apparatus 10 shown in FIG. 1.
The walking sensors 5L and 5R are formed as strain sensors. More specifically, as shown in FIG. 10, a strain gauge 75, such as that shown in FIG. 11, is fixed on a support plate 71 formed of, for example, a phosphor-bronze plate; terminals 76 and 77 of the strain gauge 75 are supported on the support plate 71 by a reinforcing member 73; and, in this state, another support plate 72, also formed of, for example, a phosphor-bronze plate, is bonded to the support plate 71.
Output signals from the walking sensors 5L and 5R change during walking as shown in FIG. 12 in terms of the AD value (the data value output from the A/D converters 57L and 57R). The output value exceeds a predetermined threshold at the timing at which the user places his/her foot on the ground, and falls below another threshold at the timing at which the user lifts his/her foot off the ground. Accordingly, the timing at which the user places the left or right foot on the ground can be detected from the output signal of the walking sensor 5L or 5R.
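A minimal sketch of this two-threshold detection is given below. The threshold values, sampling rate, and sample data are assumptions, since FIG. 12 only establishes that the output rises above one threshold at foot-strike and falls below another at foot-lift.

def foot_strike_times(ad_values, sample_rate_hz, on_threshold, off_threshold):
    # Return the times (in seconds) at which the foot is placed on the
    # ground, using the two thresholds described for FIG. 12.
    strikes = []
    on_ground = False
    for i, v in enumerate(ad_values):
        if not on_ground and v > on_threshold:
            strikes.append(i / sample_rate_hz)  # foot placed on the ground
            on_ground = True
        elif on_ground and v < off_threshold:
            on_ground = False                   # foot lifted off the ground
    return strikes

# Illustrative values only: a 100 Hz signal crossing the thresholds twice.
samples = [0, 10, 80, 90, 40, 5, 0, 85, 30, 5]
print(foot_strike_times(samples, 100.0, on_threshold=70, off_threshold=20))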
FIG. 13 illustrates a case where a walking sensor 6 is disposed farther inward than the heel of a shoe 3. In this case, as the walking sensor 6, a sensor other than the above-described strain sensor, such as an acceleration sensor, a bending sensor, a pressure sensor, a range-finding sensor, a tilt sensor, a magnetic sensor, a current sensor, a charge sensor, an electrostatic capacitance sensor, or an electromagnetic induction sensor, may be used. Alternatively, the timing at which the user places his/her foot on the ground can be detected by collecting the sound of footfalls with a microphone.
The above-described example has dealt with the case where the timing at which the user places his/her foot on the ground is detected through the detection of walking. If, instead, the timing at which the head or a hand shakes or changes from swinging in one direction to swinging in another direction is detected through the detection of the movements of the head or the hand, the invention can be configured as follows.
If, for example, the timing at which the head changes from swinging in one direction to swinging in another direction is detected through the detection of the movements of the head, acceleration sensors 7, 7a, and 7b are fixed, as shown in FIGS. 14A, 14B, and 14C, on headphones worn on the head. When walking is detected with an acceleration sensor, the periodicity of the steps can be extracted by calculating the autocorrelation of the sensor signal (a sketch of this calculation is given after the description of FIG. 15 below).
FIG. 14A illustrates a case where the acceleration sensor 7 is fixed on a head band 83 that interconnects left and right speakers 81 and 82 of an overhead headphone. FIG. 14B illustrates a case where the acceleration sensor 7 is fixed on a neck band 86 that interconnects left and right speakers 84 and 85 of a neckband headphone. FIG. 14C illustrates a case where the acceleration sensors 7a and 7b are fixed on speakers 87 and 88, respectively, which are inserted into the left and right ears, of an inner-ear headphone.
If the timing at which a hand changes from swinging in one direction to swinging in another direction is detected through the detection of the movements of the hand, a wristwatch-type acceleration sensor 8 is fixed on the wrist, as shown in FIG. 15.
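The autocorrelation calculation mentioned above can be sketched as follows; the minimum-lag cutoff and the simulated signal are assumptions used only to make the example self-contained.

import numpy as np

def step_period_s(accel, sample_rate_hz):
    # Estimate the walking period as the lag of the strongest
    # autocorrelation peak, excluding very short lags.
    x = accel - accel.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # non-negative lags
    min_lag = int(0.25 * sample_rate_hz)  # ignore lags below 0.25 s (assumed)
    lag = min_lag + int(np.argmax(ac[min_lag:]))
    return lag / sample_rate_hz

# A noisy 2 Hz "walking" signal sampled at 100 Hz yields a period near 0.5 s.
fs = 100.0
t = np.arange(0.0, 10.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 2.0 * t) + 0.3 * np.random.randn(t.size)
print(step_period_s(signal, fs))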
The body movement sensor may be fixed on a portion other than a foot, a head, a neck, or a hand, such as a calf, a knee, a thigh, a waist, a trunk, an upper arm, or an elbow, or if it is fixed on a foot, it may be fixed on a portion other than a heel, such as a toe, an instep, or an ankle. If it is fixed on a hand, it may be fixed on a portion other than a wrist, such as a finger or the back of the hand. The body movement sensor may be fixed at an article worn or carried by the user, such as a shoe.
INDUSTRIAL APPLICABILITY
As described above, according to the present invention, the user's movements can affect the playback of content, so that the entertainment value or uplifting effect derived from content playback itself can be enhanced.

Claims (19)

The invention claimed is:
1. A content playback method comprising:
detecting movements of a user;
driving a content material reader to read out content from a content materials database, wherein the content is divided into a plurality of sections according to a timing of the content;
generating, from the sections of the content, output-content for being played back in synchronization with a feature, a cycle, or a rhythm of the movements, wherein the output-content comprises portions of sections spliced together such that start timings of sections in the output-content are synchronized with a feature, a cycle, or a rhythm of the movements; and
playing back the output-content such that beginnings of sections in the output-content are played back in synchronization with a feature, a cycle, or a rhythm of the movements,
wherein the output-content is generated in real time as the movements of the user are detected such that synchronization of start timings of the sections in the output-content with the movements is maintained when a user movement rate changes.
2. The content playback method according to claim 1, wherein the feature is a start point, an end point, a maximal point, a minimal point, a maximum peak point, or a minimum peak point of the movement.
3. The content playback method according to claim 1, wherein the feature is a timing at which the user places his/her foot on the ground while walking.
4. The content playback method according to claim 1, wherein the feature is a timing at which the user shakes or changes from swinging in one direction to swinging in another direction.
5. The content playback method according to claim 1, wherein the content is a music piece and the timing is a timing of a beat or a bar of the music piece.
6. The content playback method according to claim 1, wherein the content is moving pictures and the timing is a timing of scene changes or cut changes of the moving pictures.
7. The content playback method according to claim 1, wherein the content is still images and the timing is a timing of switching the still images.
8. The content playback method according to claim 1, wherein information indicating the timing is recorded on a recording medium in association with the content.
9. The content playback method according to claim 1, wherein detecting the movements includes predicting a next movement from previous movement detection results.
10. The content playback method according to claim 1, wherein the output-content is generated so as to be played back in synchronization with a feature, a cycle, or a rhythm of the movements by:
detecting a time difference Δt between a timing tc of an ending of a given section Sn played back as output-content and a timing tm of a given movement of a user, where Δt=tc−tm; and
expanding or shortening the duration Tn+1 of a next section Sn+1 in the output-content by a factor of (T*−Δt)/Tn, where T* is a duration between the given movement and a prior movement, and Tn is a duration of the given section Sn.
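Purely as an editorial illustration (not part of the claims), the computation recited in claim 10 can be read as the following code; the numeric values in the example are invented.

def next_section_scale(t_c, t_m, t_star, t_n):
    # t_c:    timing of the ending of the given section Sn in the output
    # t_m:    timing of the given user movement
    # t_star: duration T* between the given movement and a prior movement
    # t_n:    duration Tn of the given section Sn
    delta_t = t_c - t_m              # Δt = tc − tm
    return (t_star - delta_t) / t_n  # factor (T* − Δt) / Tn

# The section ends 0.05 s after the step, steps arrive every 0.60 s, and the
# section lasts 0.62 s, so the next section is scaled by about 0.887,
# pulling its ending back toward the following step.
print(next_section_scale(t_c=10.05, t_m=10.00, t_star=0.60, t_n=0.62))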
11. The content playback method according to claim 1, wherein the output-content is generated so as to be played back in synchronization with a feature, a cycle, or a rhythm of the movements by:
selecting, at a time when a given section Sn is being played back as output-content, between (i) generating subsequent output-content by repeating a portion of the given section as output-content and (ii) generating subsequent output-content by terminating the playback of the given section Sn early and outputting a next section Sn+1 as output-content, based on a timing of a detected movement of the user, such that a beginning of the next section Sn+1 in the output-content is synchronized with the detected movement of the user.
12. A content playback apparatus comprising:
detection means for detecting movements of a user;
a content material reader for reading out content from a content materials database, wherein the content is divided into a plurality of sections according to a timing of the content;
a content generator for generating, from the sections of content read by the content material reader, output-content for being played back in synchronization with a feature, a cycle, or a rhythm of the movements, wherein the output-content comprises portions of sections spliced together such that start timings of sections in the output-content are synchronized with a feature, a cycle, or a rhythm of the movements; and
playback means for playing back the output-content such that beginnings of sections in the output-content are played back in synchronization with a feature, a cycle, or a rhythm of the movements,
wherein the content generator is configured to generate output-content in real time as the movements of the user are detected such that synchronization of start timings of the sections in the output-content with the movements is maintained when a user movement rate changes.
13. The content playback apparatus according to claim 12, wherein the detection means is fixed at part of the body of the user or at an article worn or carried by the user.
14. The content playback apparatus according to claim 12, wherein the content is a music piece and the timing of the content comprises a beat or a bar of the music piece.
15. The content playback apparatus according to claim 12, wherein the content is moving pictures and the timing of the content comprises scene changes or cut changes of the moving pictures.
16. The content playback apparatus according to claim 12, wherein the content is still images and the timing of the content comprises a timing of switching the still images.
17. The content playback apparatus according to claim 12, wherein detecting the movements includes predicting a next movement from previous movement detection results.
18. The content playback apparatus according to claim 12, wherein the content generator is configured to generate the output-content so as to be played back in synchronization with a feature, a cycle, or a rhythm of the movements by:
detecting a time difference Δt between a timing tc of an ending of a given section Sn played back as output-content and a timing tm of a given movement of a user, where Δt=tc−tm; and
expanding or shortening the duration Tn+1 of a next section Sn+1 in the output-content by a factor of (T*−Δt)/Tn, where T* is a duration between the given movement and a prior movement, and Tn is a duration of the given section Sn.
19. The content playback apparatus according to claim 12, wherein the content generator is configured to generate the output-content so as to be played back in synchronization with a feature, a cycle, or a rhythm of the movements by:
selecting, at a time when a given section Sn is being played back as output-content, between (i) generating subsequent output-content by repeating a portion of the given section Sn as output-content and (ii) generating subsequent output-content by terminating the playback of the given section Sn early and outputting a next section Sn+1 as output-content, based on a timing of a detected movement of the user, such that a beginning of the next section Sn+1 in the output-content is synchronized with the detected movement of the user.
US14/601,569 2004-10-18 2005-10-18 Content playback method and content playback apparatus Expired - Fee Related USRE47948E1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/601,569 USRE47948E1 (en) 2004-10-18 2005-10-18 Content playback method and content playback apparatus

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
JP2004-302686 2004-10-18
JP2004302686A JP2006114174A (en) 2004-10-18 2004-10-18 Content reproducing method and content reproducing device
US14/601,569 USRE47948E1 (en) 2004-10-18 2005-10-18 Content playback method and content playback apparatus
PCT/JP2005/019098 WO2006043536A1 (en) 2004-10-18 2005-10-18 Content reproducing method and content reproducing device
US11/665,578 US8358906B2 (en) 2004-10-18 2005-10-18 Content playback method and content playback apparatus

Publications (1)

Publication Number Publication Date
USRE47948E1 true USRE47948E1 (en) 2020-04-14

Family

ID=36202951

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/665,578 Ceased US8358906B2 (en) 2004-10-18 2005-10-18 Content playback method and content playback apparatus
US14/601,569 Expired - Fee Related USRE47948E1 (en) 2004-10-18 2005-10-18 Content playback method and content playback apparatus

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/665,578 Ceased US8358906B2 (en) 2004-10-18 2005-10-18 Content playback method and content playback apparatus

Country Status (8)

Country Link
US (2) US8358906B2 (en)
EP (1) EP1804235B1 (en)
JP (1) JP2006114174A (en)
KR (1) KR20070068372A (en)
CN (1) CN101057273B (en)
BR (1) BRPI0516943A (en)
RU (1) RU2398291C2 (en)
WO (1) WO2006043536A1 (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102005049485B4 (en) * 2005-10-13 2007-10-18 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Control playback of audio information
JP4548424B2 (en) 2007-01-09 2010-09-22 ヤマハ株式会社 Musical sound processing apparatus and program
JP5107853B2 (en) * 2008-10-02 2012-12-26 株式会社コナミデジタルエンタテインメント GAME DEVICE, GAME DEVICE CONTROL METHOD, AND PROGRAM
US9859988B2 (en) 2011-03-22 2018-01-02 Advanced Electroacoustics Private Limited Communications apparatus
CN102592485B (en) * 2011-12-26 2014-04-30 中国科学院软件研究所 Method for controlling notes to be played by changing movement directions
US9236039B2 (en) * 2013-03-04 2016-01-12 Empire Technology Development Llc Virtual instrument playing scheme
US9595932B2 (en) 2013-03-05 2017-03-14 Nike, Inc. Adaptive music playback system
JP2015132695A (en) * 2014-01-10 2015-07-23 ヤマハ株式会社 Performance information transmission method, and performance information transmission system
JP6326822B2 (en) 2014-01-14 2018-05-23 ヤマハ株式会社 Recording method
US20150379118A1 (en) * 2014-06-27 2015-12-31 United Video Properties, Inc. Methods and systems for generating playlists based on activities being performed by a user
GB2539875B (en) 2015-06-22 2017-09-20 Time Machine Capital Ltd Music Context System, Audio Track Structure and method of Real-Time Synchronization of Musical Content
CN104954917A (en) * 2015-06-25 2015-09-30 苏州凯枫瑞电子科技有限公司 Light-sensation power off type headphones with automatic volume adjustment function
EP4180079A1 (en) * 2016-04-14 2023-05-17 Medrhythms, Inc. Systems for neurologic rehabilitation
CN106531186B (en) * 2016-10-28 2019-07-12 中国科学院计算技术研究所 Merge the step detection method of acceleration and audio-frequency information
CN106653058B (en) * 2016-10-28 2020-03-17 中国科学院计算技术研究所 Dual-track-based step detection method
GB2557970B (en) 2016-12-20 2020-12-09 Mashtraxx Ltd Content tracking system and method

Patent Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4525074A (en) 1983-08-19 1985-06-25 Citizen Watch Co., Ltd. Apparatus for measuring the quantity of physical exercise
WO1993022762A1 (en) 1992-04-24 1993-11-11 The Walt Disney Company Apparatus and method for tracking movement to generate a control signal
US5583776A (en) 1995-03-16 1996-12-10 Point Research Corporation Dead reckoning navigational system using accelerometer to measure foot impacts
JPH09281963A (en) 1996-04-17 1997-10-31 Casio Comput Co Ltd Musical tone controller
JP2002525688A (en) 1998-09-24 2002-08-13 メダル ソシエテ ア レスポンサビリテ リミテ Automatic music generation apparatus and method
WO2000017850A1 (en) 1998-09-24 2000-03-30 Medal Sarl Automatic music generating method and device
US6506969B1 (en) 1998-09-24 2003-01-14 Medal Sarl Automatic music generating method and device
JP2000300838A (en) 1999-02-16 2000-10-31 Konami Co Ltd Game system and computer readable memory medium
JP2001224690A (en) 1999-12-08 2001-08-21 Takumi Kitazawa Sound sleeping device and recording medium formed by using the same
JP2001195059A (en) 2000-01-11 2001-07-19 Yamaha Corp Musical performance interface
US20010015123A1 (en) * 2000-01-11 2001-08-23 Yoshiki Nishitani Apparatus and method for detecting performer's motion to interactively control performance of music or the like
JP2001195060A (en) 2000-01-11 2001-07-19 Yamaha Corp Musical performance interface
JP2001350474A (en) 2000-06-08 2001-12-21 Yamaha Corp Time-series data read control device, performance control device, and video reproduction control device
JP2002065891A (en) 2000-08-31 2002-03-05 Daiichikosho Co Ltd Rehabilitation support device
JP2002268635A (en) 2001-03-06 2002-09-20 Mitsubishi Chemicals Corp Recording and reproducing device for optical information recording medium, optical information recording medium, and reproducing method for optical information recording medium
JP2003085888A (en) 2001-09-07 2003-03-20 Sony Corp Music reproducing device and control method for the same
JP2003111106A (en) 2001-09-28 2003-04-11 Toshiba Corp Apparatus for acquiring degree of concentration and apparatus and system utilizing degree of concentration
US20040000225A1 (en) * 2002-06-28 2004-01-01 Yoshiki Nishitani Music apparatus with motion picture responsive to body action
JP2004037575A (en) 2002-06-28 2004-02-05 Yamaha Corp Performance processor, performance processing program and file generation system
JP2004227638A (en) 2003-01-21 2004-08-12 Sony Corp Data recording medium, data recording method and apparatus, data reproducing method and apparatus, and data transmitting method and apparatus
JP2004228778A (en) 2003-01-21 2004-08-12 Sony Corp Method and apparatus with respect to recording, transmission or reproduction of data
WO2004072767A2 (en) 2003-02-12 2004-08-26 Koninklijke Philips Electronics N.V. Audio reproduction apparatus, method, computer program
EP1533784A2 (en) 2003-11-20 2005-05-25 Sony Corporation Playback mode control device and method
JP2005156641A (en) 2003-11-20 2005-06-16 Sony Corp Playback mode control device and method
EP1585134A1 (en) 2004-04-05 2005-10-12 Sony Corporation Contents reproduction apparatus and method thereof

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
Computer & Music, Chapter 3; pp. 100-116.
IEEE 100: The Authoritative Dictionary of IEEE Standards Terms, 2000, pages related to "data" and "play back.". *
Extended European Search Report dated Jun. 8, 2009 for corresponding European Application No. 05 79 5503.
Japanese Office Action dated Jun. 10, 2009 for corresponding Japanese Application No. 2004-302686.
Japanese Office Action dated Mar. 11, 2009 for corresponding Japanese Application No. 2004-302686.
M. Goto, "An Audio-based Real-time Beat Tracking System for Music With or Without Drum-sounds", Journal of New Music Research, vol. 30, No. 2, pp. 159-171, May 2001.
Russian Office Action dated Jul. 27, 2009 for corresponding Russian Application No. 20071146585/28 (015823).

Also Published As

Publication number Publication date
US8358906B2 (en) 2013-01-22
RU2007114585A (en) 2008-10-27
WO2006043536A1 (en) 2006-04-27
JP2006114174A (en) 2006-04-27
CN101057273B (en) 2011-04-20
US20090003802A1 (en) 2009-01-01
EP1804235B1 (en) 2013-12-04
RU2398291C2 (en) 2010-08-27
EP1804235A1 (en) 2007-07-04
BRPI0516943A (en) 2008-09-23
CN101057273A (en) 2007-10-17
EP1804235A4 (en) 2009-07-08
KR20070068372A (en) 2007-06-29

Similar Documents

Publication Publication Date Title
USRE47948E1 (en) Content playback method and content playback apparatus
JP4595555B2 (en) Content playback apparatus and content playback method
US7737353B2 (en) Apparatus for controlling music reproduction and apparatus for reproducing music
JP4403415B2 (en) Content reproduction method and content reproduction apparatus
JP4052274B2 (en) Information presentation device
US9079058B2 (en) Motion coordination operation device and method, program, and motion coordination reproduction system
JP2005156641A (en) Playback mode control device and method
JP2007193907A (en) Music reproduction controller and music reproducing device
JP2008242286A (en) Performance device and program for attaining its control method
JP4311467B2 (en) Performance apparatus and program for realizing the control method
JP5381293B2 (en) Sound emission control device
US10354630B2 (en) Performance information processing device and method
JP2006301276A (en) Portable music reproducing device
JP7175120B2 (en) Singing aid for music therapy
JP2008242285A (en) Performance device and program for attaining its control method
JP2008299631A (en) Content retrieval device, content retrieval method and content retrieval program
JP5549110B2 (en) Signal control device
JP2003228963A (en) Recording medium, device and method for data recording, and device and method for data editing
JP2003228387A (en) Operation controller
JP2007157254A (en) Contents reproduction device, retrieval server, and contents selection and reproduction method
JP2008242288A (en) Performance device and program for attaining its control method
JP2007160065A (en) Game machine
JP2007156261A (en) Sound reproducing device, sound reproducing method, and sound reproducing program

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY