WO1994022128A1 - Sound-to-light graphics system - Google Patents

Sound-to-light graphics system

Info

Publication number
WO1994022128A1
Authority
WO
WIPO (PCT)
Prior art keywords
audio signal
video
processing device
video image
optical effect
Prior art date
Application number
PCT/US1994/003181
Other languages
French (fr)
Other versions
WO1994022128A9 (en)
Inventor
Alex Blok
Original Assignee
Alex Blok
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alex Blok
Priority to AU69415/94A
Publication of WO1994022128A1
Publication of WO1994022128A9

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • This invention relates to an electronic system for producing dynamic optical effects in response to an audio input. More particularly, it relates to a system for producing complex real-time optical effects to accompany musical compositions.
  • U.S. patent 5,048,390 granted to Adachi et al. for a Tone Visualizing Apparatus, takes this concept a step farther.
  • the system described includes an image memory so that actual images displayed on the CRT screen can be affected by an audio input.
  • the size or the color of an image or the choice of which image is displayed from the image memory can be affected by different parameters of the audio input signal.
  • the different parameters which can affect the display are measured by an envelope detecting circuit, a chord detecting circuit, a Fast Fourier Transform circuit and a zero crossing detection circuit in different embodiments of the invention. It is clear that the trend is toward more and more sophisticated optical effects using sound-to-light technology.
  • the present invention seeks to take this technology much farther by providing a sound-to-light graphics system that produces optical effects having tremendous visual impact, in response to an audio input.
  • the object of the present invention is to provide a sound-to-light graphics system capable of producing optical effects with superior visual impact in response to an audio input signal.
  • An important aspect of this objective is to make the optical effects highly responsive to the beat of the music in a way that has not been accomplished by prior art systems.
  • the multiple layers can come from images created in response to different audio inputs, or they can be created in response to different parameters of the same audio signal, or one or more audio-responsive graphic layers can be combined with images from a video source, such as computer generated graphics, recorded video images or real-time video input from a video camera.
  • the transparency of the different graphic layers can be adjusted to achieve different visual effects.
  • Another objective of the invention is to be able to simulate motion of objects and patterns on the graphics display screen, and to make that motion responsive to the audio input.
  • the motion can be simulated by proper application of the color palette cycling technique just described or a number of bit map transformations can be applied to the screen image to make the images move.
  • the bit map transformations can be triggered by external commands or by the parameters of the audio input.
  • Still another objective of the present invention is to increase the visual impact of sound-to-light graphics by creating three dimensional effects on a graphics screen. For instance, three dimensional polygons or other objects can be created, scaled and animated in response to audio input.
  • Yet another object of the invention is to provide a sound-to-light graphics system which is capable of creating complex graphic effects synchronized in real time to an audio input by "looking ahead" at the audio signal so that the complexity of the graphic effects is not limited by the computer time required to create the graphic images.
  • Another reason for using the "looking ahead" feature is to create a landscape representative of the music that the listener is about to hear.
  • a number of techniques are described for performing the "looking ahead" process.
  • the present invention takes the form of a sound-to-light graphics system in which a source of audio signals is directed to an audio signal processor whose output is connected to one or more graphics processing devices.
  • the graphics processing device also receives video input from a video memory or frame store and control input from a user interface device.
  • the system may also receive video input from an external video source which can be mixed with the internal video signal using well known genlock circuitry to synchronize the video images.
  • the graphics processing device transforms the video inputs in response to the input from the audio signal processor and from the user interface device and sends the output to a display controller which causes the transformed graphic images to be displayed on one or more graphic display devices.
  • Figure 1 shows an embodiment of the sound-to-light graphics system using multiple graphics processing devices.
  • Figure 2 shows an embodiment of the sound-to-light graphics system using a single integrated graphics processing device.
  • Figures 3A and 3B show schematic representations of how the "looking ahead" feature of the sound-to-light graphics system is implemented.
  • Figure 4 shows a representation of the multilayer graphics capability of the sound-to-light graphics system.
  • Figures 5 (A-E) show an example of the palette cycling technique used to create optical effects in response to the audio input.
  • Figures 6 (A-F) show various examples of the bitmap transformation techniques used to create optical effects in response to the audio input.
  • A first embodiment of the sound-to-light graphics system of the present invention is shown in Figure 1.
  • two or more graphics processing devices are linked in series to achieve full multilayer graphics capabilities without sacrificing any processing speed.
  • the system comprises a man/machine interface 1 which enables operator commands to be input into the system, a control processor 2, an audio signal processor 3 which is preferably a spectrum analyzer, digital mass storage 4 such as Winchester discs and/or erasable optical discs, a serial data network 6, a parallel data bus 7, a multiway video switch 8, a digitizer 9, graphics processing devices 10, 11, a first video buffer 12 and a second video buffer 13.
  • the graphics processing devices 10, 11 are essentially conventional in their operation and are implemented by combining a microcomputer and genlock circuitry.
  • the genlock circuitry facilitates synchronization between video signals from separate sources.
  • Software run on the graphics processing devices 10, 11 carries out various tasks, including redefining the computer's color palette, cycling of the computer's color palette, animation of bitmap images, plotting coordinates of three dimensional facets and objects and animation of graphic image sequences stored in the computer's random access memory (RAM).
  • the graphics processing devices 10, 11 are connected to a serial data network 6 along which control data and audio data are transmitted from the control processor 2.
  • the software running on the graphics processing devices 10, 11 responds to control data signals from the serial data network 6 by putting into operation one of the above mentioned tasks.
  • the graphics processing devices 10,11 respond to the audio data signals by means of software interrupts to modify the operation of the above mentioned tasks.
  • the graphics processing devices 10, 11 are also connected by a parallel data bus 7 to each other, the control processor 2 and to the digital mass storage devices 4.
  • a library of images may be stored on the digital mass storage devices 4 for later retrieval and use by the graphics processing devices.
  • the image sequences stored in the mass storage devices 4 or in the graphics processing devices' RAM may be derived from the digitizer 9.
  • the digitizer takes a composite video signal 14 from the video switch 8 and digitizes sequences of frames into a monochrome frame store in 16 gray levels. This frame store is then accessed by the first graphics processing device 10 over a high speed data link 15 and the image data is in turn stored in the graphics processing device's own RAM for use in the production of real time optical effects.
  • the digitizer 9 can be incorporated into the hardware of first graphics device 10. This eliminates the need for the high speed data link 15 between the digitizer 9 and the first graphics processing device 10.
  • the composite video input signals 14 are selectively directed by the video switch 8, under the control of the control processor 2, either to the digitizer 9 or to input 16 of the first graphics processing device 10.
  • the composite video input signals 14 may be derived from any suitable source of signals, e.g. VTR, video cameras, etc.
  • the input video signals may then be combined with graphics, generated by the graphics processing device 10 itself under the influence of the input audio signals, to produce a variety of effects.
  • the signals thus produced are transmitted along link 17 to the second graphics processing device 11 where additional graphics may be added.
  • the new signal may then be passed on to further graphics processing machines for analogous processing. In this way, multistage processing of a video signal may be performed without incurring the time penalty that would result if only one graphics processing device was required to carry out all the processing.
  • the completed signal is then output along link 18 to the video buffer 12 and thence to the display or displays via output 19.
  • An additional RGB output 22 is provided on the first graphics processing device 10. This output is passed to the buffer 13 and then displayed locally in the vicinity of the control processor 2 for diagnostic purposes via output
  • the control processor 2 includes an audio signal processor 3, which may be a spectrum analyzer giving digital outputs, and accepts operator commands from the man/machine interface 1.
  • the man/machine interface may conveniently be any combination of display and selection means, though a keyboard and an operator controlled pointing device, such as a mouse or trackball are preferred.
  • the operator instructions are processed by the control processor 2 and the result transmitted along the serial data network 6 to the graphics processing devices 10, 11.
  • the audio signal processor 3 continuously analyzes the input audio signal 5 and then transmits representative data along the serial data network 6 to the graphics processing devices 10,11.
  • control processor and the graphics processing devices were implemented using Acorn Archimedes microcomputers. However, it will be understood that other microcomputers, minicomputers or even hardwired circuits may be used to carry out the present invention.
  • the output of the audio source is fed to an analog to digital (A/D) converter which converts the analog audio input to a digital signal.
  • the digital signal is fed into the digital signal processor. If the audio input is already in digital form such as the input from a Musical Instrument Digital Interface or MIDI, it can be used directly without passing it through the A/D converter.
  • Other sources of already digitized audio input that can be used directly include compact disks (CD) and digital audio tapes (DAT).
  • the digital audio signal from the DSP is analyzed using a Fast Fourier Transform (FFT) which breaks the signal down into its constituent frequencies for use by the graphics processing device.
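As a rough illustration of this analysis stage, the sketch below computes a naive discrete Fourier transform of one audio frame and picks out the dominant frequency bin. The sample rate, frame size and test tone are arbitrary values chosen for the example, and a real system would use an optimized FFT rather than this O(n²) loop:

```python
import cmath
import math

def dft_magnitudes(samples):
    # Naive O(n^2) DFT; returns one magnitude per frequency bin up to Nyquist.
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

# One analysis frame: a 500 Hz tone sampled at 8 kHz (example values only).
RATE = 8000
N = 64
frame = [math.sin(2 * math.pi * 500 * t / RATE) for t in range(N)]

mags = dft_magnitudes(frame)
peak_bin = max(range(len(mags)), key=lambda k: mags[k])
peak_hz = peak_bin * RATE / N   # bin width is RATE / N = 125 Hz here
print(peak_bin, peak_hz)        # → 4 500.0
```

The per-bin magnitudes are what the graphics processing device would consume, e.g. to scale or select objects by frequency band.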
  • the graphics processing device transforms video images which are retrieved from the video memory under the influence of the analyzed audio signal as well as control instructions from the user interface device.
  • the output of the graphics processing device is fed into a display controller which causes the transformed graphic images to be displayed on one or more graphic display devices.
  • the output of the graphics processing device can be combined with an external video input from a source such as a video camera or a video tape player, using genlock technology to synchronize the two images.
  • the graphic display devices can be conventional color CRT displays, flat screen displays, video projection devices or any compatible graphic display devices or any combination thereof.
  • the output of the graphics processing device can also be directed to a video recording device such as a
  • Figures 3A and 3B show how the graphics processing capabilities of the sound-to-light graphics system can be increased by using the "looking ahead" feature of the system.
  • the complexity of the optical effects that can be created is limited by the graphics processing speed of the host computer.
  • the present invention seeks to overcome these limitations by giving the system the capability of looking ahead at the prerecorded audio signal and starting the graphics processing far enough ahead of time so that the graphics processing speed is not a limiting factor. The optical effects created can then be synchronized with the audio signal as it is actually played.
  • a "look ahead" circuit for use with prerecorded audio input is shown in figure 3A.
  • a special disk reader can be supplied with two laser pickups. The first laser pickup reads the music into the DSP of the sound-to-light system which initiates the creation of the graphic effects in response to the audio signal. After a predetermined delay, the second laser pickup plays the music in synchrony with the appropriate graphic effects. For music that is prerecorded on analog or digital magnetic tapes, the music can be played back on a tape player with two magnetic heads.
  • the audio signal from the first magnetic head will be read into the DSP of the system (or into the A/D converter if analog tapes are used) to initiate the creation of the video graphics.
  • the second magnetic head plays the music in proper synchronization with the video graphics created.
  • if the audio input is a stream of digital data, such as from a MIDI device, the "looking ahead" function can be accomplished with a first in, first out type of digital audio buffer.
  • when the data first enters the buffer, it triggers the graphics processing device to create the appropriate video graphics.
  • the audio signal is played on an appropriate audio system in synchronization with the video effects created to accompany it.
  • the system appears to be creating the optical effects in real time with the music as it is being played.
  • the head start given to the graphics processing device by the "looking ahead" feature allows the sound-to-light system to create very complex optical effects that would not otherwise be possible because of the limitations of the computing time needed.
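The first-in, first-out variant of the "looking ahead" idea might be sketched as follows. The class and names here are invented for illustration, and a fixed delay of two chunks stands in for whatever head start the graphics hardware needs; it is not the patent's actual circuit:

```python
from collections import deque

class LookAheadBuffer:
    """FIFO delay line: each audio chunk triggers graphics rendering on
    entry and is released for playback `delay_chunks` later, giving the
    renderer a head start over the audio actually heard."""
    def __init__(self, delay_chunks, render):
        self.queue = deque()
        self.delay = delay_chunks
        self.render = render              # callback that starts graphics work

    def push(self, chunk):
        # Graphics work begins as soon as the chunk enters the buffer...
        self.render(chunk)
        self.queue.append(chunk)
        # ...but the chunk is only released for playback after the delay.
        if len(self.queue) > self.delay:
            return self.queue.popleft()
        return None                       # delay line still filling

rendered = []
buf = LookAheadBuffer(delay_chunks=2, render=rendered.append)
played = [buf.push(c) for c in ["c0", "c1", "c2", "c3"]]
# rendered == ["c0", "c1", "c2", "c3"]; played == [None, None, "c0", "c1"]
```

Rendering of chunk "c2" starts two chunks before "c0" is even played back, which is the head start the text describes.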
  • Prerecorded video images can be entered into the video memory and recalled later by the video processing device for display on the graphic display device.
  • the images can be entered into the video memory by bitmapping the image on a computer screen or the images can be entered from recorded video information on tape or disks or images can be created by more conventional artistic techniques and then digitized by computer.
  • the image to be displayed is selected from the video memory by a control signal from the user interface device or it can be selected by the video processing device based on the time variant spectral content of the audio signal.
  • a new image can be displayed with each beat of the music or the image can be changed with each beat by one or more image transformations performed on the image by the video processing device in response to the audio input signal.
  • the image can be made to change or move or otherwise transform to the rhythm of the music.
  • a series of prerecorded images can be entered into the video memory and recalled later for animating video sequences in response to the audio input.
  • the series of images is selected from the video memory by a control signal from the user interface device or it can be selected by the video processing device based on the time variant spectral content of the audio signal.
  • the first image of the series is displayed on the graphic display device until the video processing device detects a beat, then the video processing device sequences to the next image in the series.
  • the video processing device continues to sequence the images when a beat is detected so that the displayed video image is animated to the beat of the music.
  • the system can repeat the series of images over and over or a new series of images can be displayed after the first series has been completed.
  • Variations of this technique can be made, such as displaying a series of images with each beat for a more life-like animation of the screen images.
  • An example of this would be to prerecord a series of images showing various dance steps in the video memory, then using the video processing device to animate the dance steps in time to the beat of the music.
  • Another visual effect can be achieved by making the number of images displayed from a prerecorded sequence proportional to the amplitude of one of the musical parameters, such as the bass beat. In the example given above, this would cause the animated dancer to take a big step when a heavy bass beat is detected and to take smaller steps when only a light bass beat is detected.
  • This concept can be used with other visual effects as well by making the magnitude of the effect proportional to the amplitude of various parameters of the audio input.
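The beat-stepped animation with an amplitude-proportional step, as in the dancer example, might look like this minimal sketch. The frame names, amplitude scale and the 0.8 "heavy beat" threshold are all invented for illustration:

```python
def animate(frames, beats):
    """Advance through a stored image sequence on each detected beat.
    `beats` holds one bass amplitude per audio tick (0 = no beat);
    a heavier beat advances further through the sequence."""
    index = 0
    shown = []
    for amp in beats:
        if amp > 0:                        # beat detected on this tick
            step = 2 if amp >= 0.8 else 1  # big step on a heavy bass beat
            index = (index + step) % len(frames)
        shown.append(frames[index])
    return shown

frames = ["step1", "step2", "step3", "step4"]
# light beat, silence, heavy beat, light beat:
print(animate(frames, [0.3, 0.0, 0.9, 0.4]))
# → ['step2', 'step2', 'step4', 'step1']
```

On silence the current frame is simply held, so the dancer only moves when the music does.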
  • Images can be defined based on the spectral content of the audio input as determined by the FFT analysis. For instance, a series of objects can be defined which are reflective of different frequency ranges within the audio spectrum. An example is given in the table below:
  • the objects in the image displayed can be based on the predominant frequency of the musical piece or multiple objects can be displayed simultaneously based on the entire spectral content of the audio signal.
  • the quantity and/or the size of the objects displayed can be based on the amplitude of the corresponding frequency range.
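A band-to-object mapping of this kind might be sketched as below. The patent's actual table is not reproduced in this text, so the frequency bands and object names here are hypothetical placeholders:

```python
# Hypothetical band-to-object table: (low_hz, high_hz) -> displayed object.
BAND_OBJECTS = [
    ((20, 250), "pulsing sphere"),      # bass
    ((250, 2000), "rotating cube"),     # mid-range
    ((2000, 20000), "starburst"),       # treble
]

def objects_for_spectrum(spectrum):
    """spectrum: list of (frequency_hz, amplitude) pairs from the FFT.
    Returns each band's object with that band's summed amplitude,
    which could drive the size or quantity of the object drawn."""
    out = []
    for (lo, hi), name in BAND_OBJECTS:
        amp = sum(a for f, a in spectrum if lo <= f < hi)
        if amp > 0:
            out.append((name, round(amp, 2)))
    return out

print(objects_for_spectrum([(110, 0.9), (440, 0.4), (5000, 0.2)]))
# → [('pulsing sphere', 0.9), ('rotating cube', 0.4), ('starburst', 0.2)]
```

Displaying only the entry with the largest amplitude would give the "predominant frequency" variant mentioned above.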
  • the image can be triggered to change at each beat of the music so that the image keeps time to the beat or a different beat triggered optical effect such as color palette cycling or bitmap transformations can be superimposed on the image to make the objects change with the beat.
  • Three dimensional video effects such as 3-D fractals or 3-D polygons with light source shading can be produced by the system or recalled from the video memory in response to the audio input.
  • the 3-D images can be made in response to the spectral content of the audio input. They can also be made to move, change size, rotate and change color or form in response to the music.
  • Another pleasing visual effect that is built into the system is to make the 3-D objects change form by gradually transforming from one shape to another.
  • the 3-D objects can be triggered to transform or "morph" in response to the beat of the music or changes in tone of the music.
  • the color palette applied to the images displayed by the video processing device can be user assigned with control signals from the user interface device, or the color palette can be defined by the spectral content of the audio input.
  • the color palette applied to an image can be assigned based on the predominant frequency of the audio input at any point in time according to a look up table such as the one below:
  • the colors can also be made to change randomly, triggered by the beat of the music, or they can change through a predetermined sequence of colors.
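A frequency-to-palette lookup of the kind just described might be sketched as follows. The band boundaries and palette names are invented for illustration; the patent's actual lookup table is not reproduced in this text:

```python
import bisect

# Hypothetical lookup: band upper bounds in Hz and a palette per band.
BOUNDS = [250, 2000, 20000]
PALETTES = ["warm reds", "greens", "cool blues"]

def palette_for(predominant_hz):
    """Pick a color palette from the predominant frequency of the audio
    input at this instant (assumes predominant_hz < 20000)."""
    return PALETTES[bisect.bisect_left(BOUNDS, predominant_hz)]

print(palette_for(110))    # → warm reds
print(palette_for(5000))   # → cool blues
```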
  • When the colors are triggered to change through a predetermined sequence of colors, this is known as color palette cycling. If an object on the video screen is one solid color, then when the color palette is cycled it will merely appear to change color. However, with certain patterns color palette cycling can be used to simulate motion onscreen without actually redrawing the patterns.
  • Figure 5A shows an example of a pattern which can effectively use color palette cycling to simulate motion.
  • This pattern represents a tunnel of rectangles.
  • each concentric rectangle will be illuminated in sequence depending on the intensity of the signal, starting from the center of the pattern.
  • the color of the illuminated pieces will depend on the color palette chosen.
  • a colored ring appears to race outward toward the edge of the screen even though the pattern is not actually moving. If each of the concentric rectangles is continually sequenced through the entire range of colors defined by the color palette, then it will appear that there are continual waves of colored rings racing toward the outside of the screen. If the direction of the color palette cycling is reversed, the rings will appear to race inward from the edge to the center of the screen.
  • the direction of the color palette cycling can be changed or the color palette can be redefined by a control command from the user interface device or in response to another parameter of the audio signal.
  • the color palette cycling can be triggered to change direction at each beat of the music.
  • color palette cycling can be used to create some very complex and visually pleasing optical effects.
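The core of palette cycling is a simple rotation: each concentric rectangle keeps its palette index, and only the palette itself is shifted each tick, so the pattern is never redrawn. A minimal sketch, with an invented four-color palette:

```python
def cycle_palette(palette, offset):
    """Color shown by ring i (0 = center) after `offset` cycling steps.
    Negating the offset reverses the apparent direction of motion,
    e.g. on each beat of the music."""
    n = len(palette)
    return [palette[(i + offset) % n] for i in range(n)]

palette = ["red", "green", "blue", "black"]
# Four concentric rectangles, indexed 0 (center) to 3 (edge):
for tick in range(3):
    print(cycle_palette(palette, tick))
# tick 0: ['red', 'green', 'blue', 'black']
# tick 1: ['green', 'blue', 'black', 'red']
# tick 2: ['blue', 'black', 'red', 'green']
```

Note that each named color migrates toward smaller ring indices as the offset grows, i.e. the rings appear to race inward; cycling with a negative offset makes them race outward.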
  • Bitmap transformations are well known in the field of computer graphics so a detailed technical explanation will not be necessary. What is not known in the prior art is to use bitmap transformations in a sound-to-light graphics system to make optical effects that respond to an audio input, particularly for making patterns that respond to the beat of a musical performance.
  • a number of the possible bitmap transformations that can be carried out in response to the audio input are illustrated in figures 6A through 6F.
  • Figure 6A shows a simple graphic pattern. The pattern can be made to slide or scroll in any direction on the screen using a bitmap transformation.
  • Figure 6C shows the pattern replicated using a bitmap transformation.
  • Figure 6D shows the pattern replicated and reflected.
  • Figure 6E shows the same pattern zoomed in using a bitmap transformation.
  • images can be zoomed out using a bitmap transformation.
  • Figure 6F shows the pattern rotated by a bitmap transformation.
  • Other bitmap transformations can be used to explode or implode the video image. Any or all of the bitmap transformations can be combined to create more complex optical effects.
  • figure 6B shows the pattern of 6A after it has been simultaneously duplicated, reflected and slid using combined bitmap transformations. These transformations can also be made to react to the bass beat or other parameters of the audio input.
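The scroll, reflect and replicate transformations described above can be sketched on a small bitmap represented as a list of pixel rows. These are generic illustrations of the operations, not the patent's implementation, and as the text notes they compose freely:

```python
def scroll(bitmap, dx):
    # Slide each row dx pixels to the right with wraparound (dx > 0).
    return [row[-dx:] + row[:-dx] for row in bitmap]

def reflect(bitmap):
    # Mirror the pattern left-to-right.
    return [row[::-1] for row in bitmap]

def replicate(bitmap):
    # Tile the pattern 2x2 (duplicate horizontally, then vertically).
    return [row + row for row in bitmap] * 2

pattern = [[1, 0],
           [0, 1]]
print(scroll(pattern, 1))            # → [[0, 1], [1, 0]]
print(reflect(replicate(pattern)))   # combined transformations, as in 6B
```

Triggering any of these on a detected bass beat gives the beat-responsive patterns the text describes.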
  • In order to view multiple graphic layers on a single display screen, at least the top layer must be made "transparent" so that the layers underneath will be visible.
  • the transparency of all the graphic layers can be controlled by the system operator from the user interface device. All or some of the video image of a given layer can be made transparent. Forcing a graphic layer, that is, making it opaque, will hide all layers beneath it, thereby allowing all bit planes to be used for the forced layer.
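Per-pixel layer compositing of this kind can be sketched as below, using `None` as an invented transparency key (the patent's bit-plane mechanism is hardware-level; this only illustrates the visibility rule):

```python
def composite(layers):
    """Composite graphic layers per pixel, top layer first in the list:
    the first layer whose pixel is not transparent (None) wins, so a
    fully opaque ('forced') layer hides everything beneath it."""
    height, width = len(layers[0]), len(layers[0][0])
    out = [[None] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            for layer in layers:          # scan from top layer down
                if layer[y][x] is not None:
                    out[y][x] = layer[y][x]
                    break
    return out

top = [["A", None],
       [None, None]]                      # mostly transparent top layer
bottom = [["B", "B"],
          ["B", "B"]]
print(composite([top, bottom]))           # → [['A', 'B'], ['B', 'B']]
```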
  • any and all of the optical effects described, as well as many other graphic effects, can be combined by the sound-to-light graphics system of the present invention to create graphic images with superior visual impact in response to an audio input.
  • the system can be used to create a visual ambience in nightclubs and discos that seems to come alive and move with the music.
  • the visual impact can be increased by projecting the images onto "video walls" that will surround the patrons with moving sound-to-light images.
  • the system can also be used for creating music videos that reflect the mood and the tempo of a musical performance without the time and expense of complicated production and editing. Because the system can also be controlled by external controls or by a software program, it can also be used for creating multilayer dynamic ambient lighting effects in the absence of an audio input.
  • the sound-to-light graphics capabilities of the present invention could also be incorporated into an electronic video game in which the game parameters are affected by an audio input.
  • One possible embodiment is a three-dimensional video game in which a three-dimensional landscape changes in response to an external audio input. The speed and difficulty of the game may change and other objects in the game may appear, disappear or change shape or size in response to the audio input.
  • An important advantage of a video game incorporating the present invention is that by varying the game parameters according to an audio input, a video game can be made which offers an almost infinite variety of game situations that change in response to changes in the audio input.


Abstract

A sound-to-light graphics system in which a source of audio signals is directed to an audio signal processor (3) whose output is connected to one or more graphics processing devices (10, 11). The graphics processing device (10, 11) also receives video input (14) from a video memory or frame store and control input from a user interface device. Optionally the system may also receive video input from an external video source which can be mixed with the internal video signal using well known genlock circuitry to synchronize the video images. The graphics processing device (10, 11) transforms the video inputs (14) in response to the input from the audio signal processor (3) and from the user interface device (1) and sends the output to a display controller which causes the transformed graphic images to be displayed on one or more graphic display devices.

Description

SOUND-TO-LIGHT GRAPHICS SYSTEM
FIELD OF THE INVENTION
This invention relates to an electronic system for producing dynamic optical effects in response to an audio input. More particularly, it relates to a system for producing complex real-time optical effects to accompany musical compositions.
BACKGROUND OF THE INVENTION
It is currently the practice in discos, nightclubs and the like to accompany musical performances with lighting effects. The most common method of producing these effects is to selectively energize incandescent bulbs in a manner determined by the music being performed; such arrangements are usually referred to as sound-to-light systems. The bulbs are arranged in one or two dimensional arrays and, under the control of microprocessors or other electronic circuits, are capable of producing a range of pleasing effects. However, these systems suffer from a lack of flexibility, relocation of the light arrays being a time-consuming and strenuous task. Another disadvantage is that they appear primitive to the modern eye, which has become accustomed to increasingly sophisticated visual effects, used everywhere from "pop videos" to television news programs.
An alternative which can increase the sophistication of the visual effects is to project "pop videos" or other graphics onto a screen. This, however, also has several shortcomings. For instance, when pop videos are shown, the operator becomes liable for additional royalty payments. Other projected displays suffer from the fact that there is no relationship between the tempo of the performed music and the speed of the action in the display, reducing the level of visual impact. Another approach has been to create a video image on a CRT screen that responds to an audio input. U.S. patent 3,900,886, granted to Coyle and Stevens for a Sonic Color System, describes a device which attaches to a color television to produce color images in response to an audio input from a source such as a record player, tape player or radio. The system varies the colors displayed on the television screen in response to the amplitude or frequency of the audio input signal.
U.S. patent 5,048,390, granted to Adachi et al. for a Tone Visualizing Apparatus, takes this concept a step farther. The system described includes an image memory so that actual images displayed on the CRT screen can be affected by an audio input. The size or the color of an image or the choice of which image is displayed from the image memory can be affected by different parameters of the audio input signal. The different parameters which can affect the display are measured by an envelope detecting circuit, a chord detecting circuit, a Fast Fourier Transform circuit and a zero crossing detection circuit in different embodiments of the invention. It is clear that the trend is toward more and more sophisticated optical effects using sound-to-light technology. The present invention seeks to take this technology much farther by providing a sound-to-light graphics system that produces optical effects having tremendous visual impact, in response to an audio input.
SUMMARY OF THE INVENTION
In keeping with the foregoing discussion, the object of the present invention is to provide a sound-to-light graphics system capable of producing optical effects with superior visual impact in response to an audio input signal. An important aspect of this objective is to make the optical effects highly responsive to the beat of the music in a way that has not been accomplished by prior art systems. It is also an object of the invention to provide a sound-to-light graphics system that displays multiple layer graphics for increased visual impact. The multiple layers can come from images created in response to different audio inputs, or they can be created in response to different parameters of the same audio signal, or one or more audio-responsive graphic layers can be combined with images from a video source, such as computer generated graphics, recorded video images or real-time video input from a video camera. The transparency of the different graphic layers can be adjusted to achieve different visual effects.
It is a further objective of the invention to make one or more of the graphic layers on screen change colors in response to an audio input by cycling through a color palette. It is also an objective to make the system capable of redefining the color palette by external control or in response to one or more parameters of the audio input. Another objective is to make the system capable of changing the direction of the color palette cycling in response to one or more parameters of the audio input.
Another objective of the invention is to be able to simulate motion of objects and patterns on the graphics display screen, and to make that motion responsive to the audio input. The motion can be simulated by proper application of the color palette cycling technique just described or a number of bit map transformations can be applied to the screen image to make the images move. The bit map transformations can be triggered by external commands or by the parameters of the audio input.
Still another objective of the present invention is to increase the visual impact of sound-to- light graphics by creating three dimensional effects on a graphics screen. For instance, three dimensional polygons or other objects can be created, scaled and animated in response to audio input.
Yet another object of the invention is to provide a sound-to-light graphics system which is capable of creating complex graphic effects synchronized in real time to an audio input by "looking ahead" at the audio signal so that the complexity of the graphic effects is not limited by the computer time required to create the graphic images. Another reason for using the "looking ahead" feature is to create a landscape representative of the music that the listener is about to hear. A number of techniques are described for performing the "looking ahead" process.
To accomplish these objectives, the present invention takes the form of a sound-to-light graphics system in which a source of audio signals is directed to an audio signal processor whose output is connected to one or more graphics processing devices. The graphics processing device also receives video input from a video memory or frame store and control input from a user interface device. Optionally, the system may also receive video input from an external video source, which can be mixed with the internal video signal using well known genlock circuitry to synchronize the video images. The graphics processing device transforms the video inputs in response to the input from the audio signal processor and from the user interface device and sends the output to a display controller which causes the transformed graphic images to be displayed on one or more graphic display devices.
Other features of the sound-to-light graphics system of the present invention, as well as additional objects and advantages of the system, will become apparent to those skilled in the art upon reading and understanding the following detailed description along with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 shows an embodiment of the sound-to-light graphics system using multiple graphics processing devices.
Figure 2 shows an embodiment of the sound-to-light graphics system using a single integrated graphics processing device.
Figures 3 (A-B) show schematic representations of how the "looking ahead" feature of the sound-to-light graphics system is implemented.
Figure 4 shows a representation of the multilayer graphics capability of the sound-to-light graphics system.
Figures 5 (A-E) show an example of the palette cycling technique used to create optical effects in response to the audio input.
Figures 6 (A-F) show various examples of the bitmap transformation techniques used to create optical effects in response to the audio input.
DETAILED DESCRIPTION OF THE INVENTION
A first embodiment of the sound-to-light graphics system of the present invention is shown in Figure 1. In this embodiment, two or more graphics processing devices are linked in series to achieve full multilayer graphics capabilities without sacrificing any processing speed.
Referring to Figure 1, the system comprises a man/machine interface 1 which enables operator commands to be input into the system, a control processor 2, an audio signal processor 3 which is preferably a spectrum analyzer, digital mass storage 4 such as Winchester discs and/or erasable optical discs, a serial data network 6, a parallel data bus 7, a multiway video switch 8, a digitizer 9, graphics processing devices 10, 11, a first video buffer 12 and a second video buffer 13. The graphics processing devices 10, 11 are essentially conventional in their operation and are implemented by combining a microcomputer and genlock circuitry. The genlock circuitry facilitates synchronization between video signals from separate sources. Software run on the graphics processing devices 10, 11 carries out various tasks, including redefining the computer's color palette, cycling the computer's color palette, animation of bitmap images, plotting coordinates of three dimensional facets and objects, and animation of graphic image sequences stored in the computer's random access memory (RAM). The graphics processing devices 10, 11 are connected to a serial data network 6 along which control data and audio data are transmitted from the control processor 2. The software running on the graphics processing devices 10, 11 responds to control data signals from the serial data network 6 by putting into operation one of the above mentioned tasks. The graphics processing devices 10, 11 respond to the audio data signals by means of software interrupts to modify the operation of the above mentioned tasks. For instance, the speed of cycling of the computer's palette, or the playback speed and direction of an image sequence stored in RAM, could be varied. The graphics processing devices 10, 11 are also connected by a parallel data bus 7 to each other, the control processor 2 and the digital mass storage devices 4.
A library of images may be stored on the digital mass storage devices 4 for later retrieval and use by the graphics processing devices.
The image sequences stored in the mass storage devices 4 or in the graphics processing devices' RAM may be derived from the digitizer 9. The digitizer takes a composite video signal 14 from the video switch 8 and digitizes sequences of frames into a monochrome frame store in 16 gray levels. This frame store is then accessed by the first graphics processing device 10 over a high speed data link 15 and the image data is in turn stored in the graphics processing device's own RAM for use in the production of real time optical effects. In practice, the digitizer 9 can be incorporated into the hardware of the first graphics processing device 10. This eliminates the need for the high speed data link 15 between the digitizer 9 and the first graphics processing device 10.
The composite video input signals 14 are selectively directed by the video switch 8, under the control of the control processor 2, either to the digitizer 9 or to input 16 of the first graphics processing device 10. The composite video input signals 14 may be derived from any suitable source of signals, e.g. VTR, video cameras, etc. The input video signals may then be combined with graphics, generated by the graphics processing device 10 itself under the influence of the input audio signals, to produce a variety of effects. The signals thus produced are transmitted along link 17 to the second graphics processing device 11 where additional graphics may be added. The new signal may then be passed on to further graphics processing machines for analogous processing. In this way, multistage processing of a video signal may be performed without incurring the time penalty that would result if only one graphics processing device were required to carry out all the processing. The completed signal is then output along link 18 to the video buffer 12 and thence to the display or displays via output 19. An additional RGB output 22 is provided on the first graphics processing device 10. This output is passed to the buffer 13 and then displayed locally in the vicinity of the control processor 2 for diagnostic purposes via output 21.
The control processor 2, as previously stated, includes an audio signal processor 3, which may be a spectrum analyzer giving digital outputs, and accepts operator commands from the man/machine interface 1. The man/machine interface may conveniently be any combination of display and selection means, though a keyboard and an operator controlled pointing device, such as a mouse or trackball, are preferred. The operator instructions are processed by the control processor 2 and the result transmitted along the serial data network 6 to the graphics processing devices 10, 11. Meanwhile, the audio signal processor 3 continuously analyzes the input audio signal 5 and then transmits representative data along the serial data network 6 to the graphics processing devices 10, 11.
In the embodiment described, the control processor and the graphics processing devices were implemented using Acorn Archimedes microcomputers. However, it will be understood that other microcomputers, minicomputers or even hardwired circuits may be used to carry out the present invention.
Recent advances in hardware and software have allowed the sound-to-light graphics system to achieve nearly the same graphics processing performance with a single integrated graphics processing device. This embodiment of the system is shown in Figure 2. An important part of the advance is the availability of a microcomputer with a built in digital signal processor or DSP chip. The currently preferred hardware for this embodiment is an Atari Falcon030 microcomputer, which has a Motorola 68030 microprocessor, a built in 32 MHz Motorola 56001 Digital Signal Processor and a built in MIDI interface, although the current invention can be implemented on any equivalent electronics system, whether it is microcomputer based or specially made for the purpose. Referring to Figure 2, the output of the audio source is fed to an analog to digital (A/D) converter which converts the analog audio input to a digital signal. The digital signal is fed into the digital signal processor. If the audio input is already in digital form, such as the input from a Musical Instrument Digital Interface or MIDI, it can be used directly without passing it through the A/D converter. Other sources of already digitized audio input that can be used directly include compact disks (CD) and digital audio tapes (DAT). The digital audio signal from the DSP is analyzed using a Fast Fourier Transform (FFT) which breaks the signal down into its constituent frequencies for use by the graphics processing device. The graphics processing device transforms video images which are retrieved from the video memory under the influence of the analyzed audio signal as well as control instructions from the user interface device. The output of the graphics processing device is fed into a display controller which causes the transformed graphic images to be displayed on one or more graphic display devices. If desired, the output of the graphics processing device can be combined with an external video input from a source such as a video camera or a video tape player, using genlock technology to synchronize the two images. The graphic display devices can be conventional color CRT displays, flat screen displays, video projection devices or any compatible graphic display devices, or any combination thereof. The output of the graphics processing device can also be directed to a video recording device such as a
VTR for later playback. This option will be particularly useful for the production of music videos using the sound-to-light graphics system.
Figures 3A and 3B show how the graphics processing capabilities of the sound-to-light graphics system can be increased by using the "looking ahead" feature of the system. In prior art sound-to-light systems, the complexity of the optical effects that can be created is limited by the graphics processing speed of the host computer. The present invention seeks to overcome these limitations by giving the system the capability of looking ahead at the prerecorded audio signal and starting the graphics processing far enough ahead of time so that the graphics processing speed is not a limiting factor. The optical effects created can then be synchronized with the audio signal as it is actually played.
There are several possible ways to implement this "looking ahead" feature. A "look ahead" circuit for use with prerecorded audio input is shown in figure 3A. For music that is prerecorded on a compact optical disk, a special disk reader can be supplied with two laser pickups. The first laser pickup reads the music into the DSP of the sound-to-light system, which initiates the creation of the graphic effects in response to the audio signal. After a predetermined delay, the second laser pickup plays the music in synchrony with the appropriate graphic effects. For music that is prerecorded on analog or digital magnetic tapes, the music can be played back on a tape player with two magnetic heads. The audio signal from the first magnetic head is read into the DSP of the system (or into the A/D converter if analog tapes are used) to initiate the creation of the video graphics. After a predetermined delay, the second magnetic head plays the music in proper synchronization with the video graphics created. If the audio input is a stream of digital data, such as from a MIDI device, the "looking ahead" function can be accomplished with a first in, first out type of digital audio buffer, as shown in figure 3B. When the data first enters the buffer, it initiates the graphics processing device to create the appropriate video graphics. When the data exits the buffer, the audio signal is played on an appropriate audio system in synchronization with the video effects created to accompany it.
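The first-in, first-out variant of the "looking ahead" delay can be sketched as follows. This is a minimal Python illustration, not the disclosed hardware: the frame representation, buffer depth and the `start_graphics` callback are all assumptions for illustration.

```python
from collections import deque

class LookAheadBuffer:
    """FIFO sketch of the "looking ahead" feature: each audio frame is
    handed to the graphics stage the moment it enters the buffer, and is
    released for audio playback only `depth` frames later, giving the
    renderer a fixed head start over the loudspeakers."""

    def __init__(self, depth):
        self.depth = depth
        self.fifo = deque()

    def push(self, frame, start_graphics):
        # Entering the buffer triggers graphics processing immediately.
        start_graphics(frame)
        self.fifo.append(frame)
        # The frame leaves the buffer (for playback) after the delay.
        if len(self.fifo) > self.depth:
            return self.fifo.popleft()
        return None

# Drive five frames through a buffer two frames deep.
rendered, played = [], []
buf = LookAheadBuffer(depth=2)
for frame in range(5):
    out = buf.push(frame, rendered.append)
    if out is not None:
        played.append(out)
# Graphics always lead playback by the buffer depth:
# rendered == [0, 1, 2, 3, 4], played == [0, 1, 2]
```

The head start equals the buffer depth times the frame duration, so the depth would be tuned to the longest rendering time the graphics processing device needs.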
To the listener, the system appears to be creating the optical effects in real time with the music as it is being played. However, the head start given to the graphics processing device by the "looking ahead" feature allows the sound-to-light system to create very complex optical effects that would not otherwise be possible because of the limitations of the computing time needed.
OPERATIONAL DESCRIPTION
No matter which hardware platform is used for the sound-to-light graphics system, the operation of the system is essentially the same. When the incoming audio signal is subjected to the Fast Fourier Transform analysis, the signal is broken down into its constituent frequencies. This information about the time variant spectral content of the audio signal is used to determine many of the parameters of the optical effects created. The amplitude fluctuation of the bass frequencies is used to determine the beat of the music. This is an important feature of the present sound-to-light system because it allows the optical effects created to be synchronized with the beat of the music. Prior art sound-to-light video graphics systems concentrated mostly on matching the color of the video images to the mood of the music being played. For accompanying dance music that will be played in discos and nightclubs, it is even more important to match the video effects to the tempo of the music and to synchronize them with the musical beat. When a beat is detected by a peak in the bass frequencies, one of several possible optical effects is initiated. On each subsequent beat which is detected, a change in the optical effect is initiated so that the video images keep time with the musical beat. The choice of which optical effect is to be initiated is affected by the control signal from the user interface and the system software, as well as other parameters of the audio input. Some of the possible optical effects that can be executed by the system are described below.
DISPLAYING A PRERECORDED VIDEO IMAGE
Prerecorded video images can be entered into the video memory and recalled later by the video processing device for display on the graphic display device.
The images can be entered into the video memory by bitmapping the image on a computer screen, or the images can be entered from recorded video information on tape or disks, or images can be created by more conventional artistic techniques and then digitized by computer. The image to be displayed is selected from the video memory by a control signal from the user interface device, or it can be selected by the video processing device based on the time variant spectral content of the audio signal. A new image can be displayed with each beat of the music, or the image can be changed with each beat by one or more image transformations performed on the image by the video processing device in response to the audio input signal. The image can be made to change or move or otherwise transform to the rhythm of the music.
ANIMATING A SERIES OF PRERECORDED VIDEO IMAGES
A series of prerecorded images can be entered into the video memory and recalled later for animating video sequences in response to the audio input. The series of images is selected from the video memory by a control signal from the user interface device, or it can be selected by the video processing device based on the time variant spectral content of the audio signal. The first image of the series is displayed on the graphic display device until the video processing device detects a beat; then the video processing device sequences to the next image in the series. The video processing device continues to sequence the images when a beat is detected so that the displayed video image is animated to the beat of the music. The system can repeat the series of images over and over, or a new series of images can be displayed after the first series has been completed. Variations of this technique can be made, such as displaying a series of images with each beat for a more life-like animation of the screen images. An example of this would be to prerecord a series of images showing various dance steps in the video memory, then using the video processing device to animate the dance steps in time to the beat of the music. Another visual effect can be achieved by making the number of images displayed from a prerecorded sequence proportional to the amplitude of one of the musical parameters, such as the bass beat. In the example given above, this would cause the animated dancer to take a big step when a heavy bass beat is detected and to take smaller steps when only a light bass beat is detected. This concept can be used with other visual effects as well by making the magnitude of the effect proportional to the amplitude of various parameters of the audio input.
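The beat detection and beat-synchronized animation stepping described above can be sketched in a few lines. This is an illustrative Python sketch using NumPy: the 150 Hz bass cutoff, the running-mean threshold ratio and the `max_step` scaling are assumed tuning values, none of which are specified in the disclosure.

```python
import numpy as np

def bass_energy(frame, rate=44100, cutoff_hz=150.0):
    """Sum the FFT magnitudes of one audio frame below cutoff_hz (the
    bass component used to find the beat)."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / rate)
    return float(spectrum[freqs < cutoff_hz].sum())

def detect_beats(energies, threshold_ratio=1.5):
    """Flag a beat wherever the bass energy jumps well above its
    running mean (an amplitude peak in the bass frequencies)."""
    beats, history = [], []
    for i, e in enumerate(energies):
        if history and e > threshold_ratio * (sum(history) / len(history)):
            beats.append(i)
        history.append(e)
    return beats

def advance_animation(frame_index, num_frames, bass_amplitude, max_step=4):
    """On each detected beat, step through the prerecorded image
    sequence; a heavy beat (amplitude near 1.0) takes a bigger step
    than a light one, like the dancer example."""
    step = max(1, round(bass_amplitude * max_step))
    return (frame_index + step) % num_frames
```

A real implementation would run `bass_energy` on successive frames of the live (or looked-ahead) signal and call `advance_animation` whenever `detect_beats` fires.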
DISPLAYING A VIDEO IMAGE BASED ON THE SPECTRAL CONTENT OF THE AUDIO SIGNAL
Images can be defined based on the spectral content of the audio input as determined by the FFT analysis. For instance, a series of objects can be defined which are reflective of different frequency ranges within the audio spectrum. An example is given in the table below:
Frequency       Shape
7. Highest      Twinkles/Lightning
6.              Lines
5.              Triangles
4. Middle       Rectangles
3.              Diamonds
2.              Polygons
1. Lowest       Concentric Circles
The objects in the image displayed can be based on the predominant frequency of the musical piece, or multiple objects can be displayed simultaneously based on the entire spectral content of the audio signal. The quantity and/or the size of the objects displayed can be based on the amplitude of the corresponding frequency range. The image can be triggered to change at each beat of the music so that the image keeps time to the beat, or a different beat triggered optical effect, such as color palette cycling or bitmap transformations, can be superimposed on the image to make the objects change with the beat.
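The frequency-to-shape table can be implemented as a simple look-up on the predominant frequency. In the sketch below the band edges in Hz are illustrative assumptions (the patent does not specify them); only the shape assignments come from the table.

```python
# Hypothetical band edges in Hz for the seven frequency ranges;
# the shape names follow the table in the text.
SHAPE_BANDS = [
    (0,     60,    "Concentric Circles"),  # 1. Lowest
    (60,    250,   "Polygons"),            # 2.
    (250,   500,   "Diamonds"),            # 3.
    (500,   2000,  "Rectangles"),          # 4. Middle
    (2000,  4000,  "Triangles"),           # 5.
    (4000,  8000,  "Lines"),               # 6.
    (8000,  22050, "Twinkles/Lightning"),  # 7. Highest
]

def shape_for_frequency(freq_hz):
    """Choose the display shape from the predominant frequency of the
    audio signal, as determined by the FFT analysis."""
    for low, high, shape in SHAPE_BANDS:
        if low <= freq_hz < high:
            return shape
    return SHAPE_BANDS[-1][2]  # clamp anything above the top band
```

Displaying multiple objects simultaneously would simply run this look-up over every band whose amplitude exceeds a threshold, scaling each object's size by that band's amplitude.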
DISPLAYING THREE DIMENSIONAL VIDEO EFFECTS
Three dimensional video effects, such as 3-D fractals or 3-D polygons with light source shading, can be produced by the system or recalled from the video memory in response to the audio input. Like the 2-D images described above, the 3-D images can be made in response to the spectral content of the audio input. They can also be made to move, change size, rotate and change color or form in response to the music. Another pleasing visual effect that is built into the system is to make the 3-D objects change form by gradually transforming from one shape to another. The 3-D objects can be triggered to transform or "morph" in response to the beat of the music or changes in tone of the music. Other types of 3-D effects that can be achieved are tunnels that may be round or polygonal in shape, and 3-D landscapes that scroll toward the viewer as if he or she were flying over a terrain in a low flying aircraft. Different parameters of the music will create the hills and valleys in the 3-D landscape.
DEFINING A COLOR PALETTE
The color palette applied to the images displayed by the video processing device can be user assigned with control signals from the user interface device, or the color palette can be defined by the spectral content of the audio input. For example, the color palette applied to an image can be assigned based on the predominant frequency of the audio input at any point in time according to a look up table such as the one below:
Frequency       Primary Color Fade
7. Highest      White
6.              Yellow
5.              Cyan
4. Middle       Green
3.              Magenta
2.              Red
1. Lowest       Blue
The colors can also be made to change randomly, triggered by the beat of the music, or they can change through a predetermined sequence of colors.
COLOR PALETTE CYCLING
When the colors are triggered to change through a predetermined sequence of colors, this is known as color palette cycling. If an object on the video screen is one solid color, then when the color palette is cycled it will merely appear to change color. However, with certain patterns color palette cycling can be used to simulate motion onscreen without actually redrawing the patterns.
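The mechanism can be sketched in a few lines of Python: the bitmap's palette indices never change, only the palette entries rotate, so a lit region appears to move without the pattern being redrawn. The list representation of the palette is an assumption for illustration.

```python
def cycle_palette(palette, direction=1):
    """Rotate the palette one step. Pixels keep their palette indices,
    so rotating forward moves lit entries to higher-numbered indices
    (apparent motion outward through concentric rings); direction=-1
    reverses the apparent motion without touching the bitmap."""
    d = direction % len(palette)
    return palette[-d:] + palette[:-d]

# Four concentric rings, ring i painted with palette entry i.
palette = ["lit", "dark", "dark", "dark"]   # only ring 0 (center) is lit
palette = cycle_palette(palette)            # lit entry moves to ring 1
palette = cycle_palette(palette)            # lit entry moves to ring 2
# → ["dark", "dark", "lit", "dark"]: the ring appears to race outward.
```

Reversing `direction` on each detected beat would give the beat-triggered direction change described below.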
Figure 5A shows an example of a pattern which can effectively use color palette cycling to simulate motion. This pattern represents a tunnel of rectangles. Using the bass component of the audio signal, each concentric rectangle will be illuminated in sequence depending on the intensity of the signal, starting from the center of the pattern. The color of the illuminated pieces will depend on the color palette chosen. As each sequential rectangle is illuminated, a colored ring appears to race outward toward the edge of the screen even though the pattern is not actually moving. If each of the concentric rectangles is continually sequenced through the entire range of colors defined by the color palette, then it will appear that there are continual waves of colored rings racing toward the outside of the screen. If the direction of the color palette cycling is reversed, the rings will appear to race inward from the edge to the center of the screen. At any particular time, the direction of the color palette cycling can be changed or the color palette can be redefined by a control command from the user interface device or in response to another parameter of the audio signal. For instance, the color palette cycling can be triggered to change direction at each beat of the music. By proper design of the patterns, color palette cycling can be used to create some very complex and visually pleasing optical effects.
BITMAP TRANSFORMATIONS
Bitmap transformations are well known in the field of computer graphics so a detailed technical explanation will not be necessary. What is not known in the prior art is to use bitmap transformations in a sound-to-light graphics system to make optical effects that respond to an audio input, particularly for making patterns that respond to the beat of a musical performance. A number of the possible bitmap transformations that can be carried out in response to the audio input are illustrated in figures 6A through 6F. Figure 6A shows a simple graphic pattern. The pattern can be made to slide or scroll in any direction on the screen using a bitmap transformation.
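The sliding transformation can be sketched with NumPy array operations on a bitmap of palette indices. This is an illustrative implementation, not the one disclosed; wrap-around scrolling is an assumed behavior.

```python
import numpy as np

def scroll_bitmap(bitmap, dx=0, dy=0):
    """Slide the bitmap by (dx, dy) pixels, wrapping around at the
    screen edges. Flipping the signs of dx and dy on each detected
    beat makes the pattern "bounce" in time to the music."""
    return np.roll(np.roll(bitmap, dy, axis=0), dx, axis=1)

def reflect_bitmap(bitmap):
    """Mirror the bitmap left to right, as in a replicate-and-reflect
    effect."""
    return bitmap[:, ::-1]
```

Replication, zooming and rotation compose the same way: each is an index remapping of the source bitmap, and chaining the remappings gives the combined effects.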
The direction of the sliding can be made to change or bounce in time to the beat of the music. Figure 6C shows the pattern replicated using a bitmap transformation. Figure 6D shows the pattern replicated and reflected. Figure 6E shows the same pattern zoomed in using a bitmap transformation. Similarly, images can be zoomed out using a bitmap transformation. Figure 6F shows the pattern rotated by a bitmap transformation. Other bitmap transformations can be used to explode or implode the video image. Any or all of the bitmap transformations can be combined to create more complex optical effects. For example, figure 6B shows the pattern of 6A after it has been simultaneously duplicated, reflected and slid using combined bitmap transformations. These transformations can also be made to react to the bass beat or other parameters of the audio input.
DISPLAYING MULTIPLE GRAPHIC LAYERS
The ability to display multiple graphic layers on a graphic display screen is an important part of the present invention. None of the known prior art sound-to-light systems exhibit this capability. By layering the graphics, any of the optical effects that have been discussed can be combined for even greater visual impact. The layering can be used to create an impression of depth or to create additional images from the mixing of the individual layered images. Figure 4 shows a representation of the multilayer graphics capability of the sound-to-light graphics system. The furthest back layer will be known as layer A, the next layer is layer B, etcetera, for as many layers as the capabilities of the system hardware will allow. Each time an extra layer is added, the number of bits available per layer will be reduced. The hardware platform described in figure 2 supports 8 bit color, so two video layers can be effectively displayed and still provide realistic color reproduction. In addition, a video layer can also be added to the graphic display.
CHANGING THE TRANSPARENCY OF THE GRAPHIC LAYERS
In order to view multiple graphic layers on a single display screen, at least the top layer must be made "transparent" so that the layers underneath will be visible. The transparency of all the graphic layers can be controlled by the system operator from the user interface device. All or some of the video image of a given layer can be made transparent. Forcing a graphic layer, that is, making it opaque, will hide all other layers, therefore allowing all bit planes to be used for the forced layer.
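Layer transparency amounts to per-pixel blending of the upper layer over the one beneath it. The sketch below is an assumed illustration using 0-255 gray values; the disclosure does not specify the pixel format or blending law.

```python
def blend_layers(layer_a, layer_b, transparency_b):
    """Composite layer B (front) over layer A (back), pixel by pixel.

    transparency_b runs from 0.0 (B fully opaque: a "forced" layer
    that hides A) to 1.0 (B fully transparent: only A shows).
    Pixels here are 0-255 gray values, an assumption for illustration.
    """
    t = transparency_b
    return [[round((1 - t) * b + t * a) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(layer_a, layer_b)]
```

Tying `transparency_b` to an audio parameter, such as the bass amplitude, would make the front layer fade in and out with the music.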
REVERSING THE DISPLAY POSITION OF THE GRAPHIC LAYERS
Another effect that can be achieved is to reverse the graphic layers on the display screen, bringing the background to the front and moving the foreground to the back. Like all of the other effects, this effect can be controlled through the user interface or it can be made to trigger on the bass beat or on another parameter of the audio input.
CONCLUSION
Any and all of the optical effects described, as well as many other graphic effects, can be combined by the sound-to-light graphics system of the present invention to create graphic images with superior visual impact in response to an audio input. The system can be used to create a visual ambiance in nightclubs and discos that seems to come alive and move with the music. The visual impact can be increased by projecting the images onto "video walls" that will surround the patrons with moving sound-to-light images. The system can also be used for creating music videos that reflect the mood and the tempo of a musical performance without the time and expense of complicated production and editing. Because the system can also be controlled by external controls or by a software program, it can also be used for creating multilayer dynamic ambient lighting effects in the absence of an audio input. One application of this is to use large flat panel graphic displays to create "3-D wallpaper" with moving, multilayer graphic images. The sound-to-light graphics capabilities of the present invention could also be incorporated into an electronic video game in which the game parameters are affected by an audio input. One possible embodiment is a three-dimensional video game in which a three-dimensional landscape changes in response to an external audio input. The speed and difficulty of the game may change, and other objects in the game may appear, disappear or change shape or size in response to the audio input. An important advantage of a video game incorporating the present invention is that by varying the game parameters according to an audio input, a video game can be made which offers an almost infinite variety of game situations that change in response to changes in the audio input. Although the examples given include many specificities, they are intended as illustrative of only some of the possible embodiments of the invention.
Other embodiments and modifications will, no doubt, occur to those skilled in the art. Thus, the examples given should only be interpreted as illustrations of some of the preferred embodiments of the invention, and the full scope of the invention should be determined by the appended claims and their legal equivalents.

Claims

I claim:
1. A method for creating sound-to-light effects, comprising the steps of:
a) providing an electronic system comprising an audio signal processor linked to a video processing device which is linked to a graphic display device,
b) inputting an audio signal into said audio signal processor,
c) analyzing said audio signal with said audio signal processor to determine the time variant spectral content of said audio signal,
d) inputting said time variant spectral content into said video processing device,
e) said video processing device detecting a beat in said audio signal,
f) said video processing device initiating an optical effect upon detection of said beat,
g) displaying said optical effect on said graphic display device,
h) said video processing device detecting a subsequent beat in said audio signal,
i) said video processing device initiating a change in said optical effect upon detection of said subsequent beat,
j) displaying said change in said optical effect on said graphic display device.
2. The method of claim 1 wherein said beat in said audio signal is detected as an amplitude peak in a bass frequency component of said spectral content of said audio signal and said subsequent beat in said audio signal is detected as a subsequent amplitude peak in a bass frequency component of said spectral content of said audio signal.
3. The method of claim 1 wherein said optical effect is chosen from the set of optical effects consisting of displaying a first video image, defining a color palette for a video image, cycling a color palette for a video image, and performing a bit map transformation on a video image.
4. The method of claim 1 wherein said optical effect is to define a color palette for a video image, said color palette being based on the instantaneous spectral content of said audio signal.
5. The method of claim 1 wherein said change in said optical effect is chosen from the set of optical effects consisting of displaying a second video image, redefining a color palette for a video image, cycling a color palette for a video image, changing the direction of cycling of a color palette for a video image, and performing a bit map transformation on a video image.
6. The method of claim 1 wherein said change in said optical effect is to define a color palette for a video image, said color palette being based on the instantaneous spectral content of said audio signal.
7. A method for creating sound-to-light effects, comprising the steps of:
a) providing an electronic system comprising an audio signal processor linked to a video processing device which is linked to a graphic display device, said video processing device being linked to a video memory device,
b) recording at least one video image in said video memory device,
c) selecting a video image from the contents of said video memory device and inputting said video image into said video processing device,
d) displaying said video image on said graphic display device,
e) inputting an audio signal into said audio signal processor,
f) analyzing said audio signal with said audio signal processor to determine the time variant spectral content of said audio signal,
g) inputting said time variant spectral content into said video processing device,
h) said video processing device detecting a beat in said audio signal,
i) said video processing device initiating an optical effect upon detection of said beat,
j) displaying said optical effect on said graphic display device,
k) said video processing device detecting a subsequent beat in said audio signal,
l) said video processing device initiating a change in said optical effect upon detection of said subsequent beat,
m) displaying said change in said optical effect on said graphic display device.
8. A method for creating sound-to-light effects, comprising the steps of:
a) providing an electronic system comprising an audio signal processor linked to a video processing device which is linked to a graphic display device, said video processing device being linked to a video memory device,
b) recording at least two video images in said video memory device,
c) selecting a first video image from the contents of said video memory device and inputting said first video image into said video processing device,
d) displaying said first video image as a first graphic layer on said graphic display device,
e) selecting a second video image from the contents of said video memory device and inputting said second video image into said video processing device,
f) displaying said second video image as a second graphic layer on said graphic display device,
g) inputting an audio signal into said audio signal processor,
h) analyzing said audio signal with said audio signal processor to determine the time variant spectral content of said audio signal,
i) inputting said time variant spectral content into said video processing device,
j) said video processing device detecting a beat in said audio signal,
k) said video processing device initiating an optical effect in at least one of said first and second graphic layers upon detection of said beat,
l) displaying said optical effect on said graphic display device,
m) said video processing device detecting a subsequent beat in said audio signal,
n) said video processing device initiating a change in said optical effect in said at least one of said first and second graphic layers upon detection of said subsequent beat,
o) displaying said change in said optical effect on said graphic display device.
9. The method of claim 8 wherein said optical effect is chosen from the set of optical effects consisting of displaying a video image, defining a color palette for a video image, cycling a color palette for a video image, changing the transparency level of at least one of said first and second graphic layers, reversing the positions front-to-back of how said first and second graphic layers are displayed on said graphic display device, and performing a bit map transformation on a video image.
10. The method of claim 8 wherein said optical effect is to define a color palette for a video image, said color palette being based on the instantaneous spectral content of said audio signal.
11. The method of claim 8 wherein said change in said optical effect is chosen from the set of optical effects consisting of displaying a video image, redefining a color palette for a video image, cycling a color palette for a video image, changing the direction of cycling of a color palette for a video image, changing the transparency level of at least one of said first and second graphic layers, reversing the positions front-to-back of how said first and second graphic layers are displayed on said graphic display device, and performing a bit map transformation on a video image.
12. The method of claim 8 wherein said change in said optical effect is to define a color palette for a video image, said color palette being based on the instantaneous spectral content of said audio signal.
13. A method for creating sound-to-light effects, comprising the steps of:
a) providing an electronic system comprising an audio signal processor linked to a video processing device which is linked to a graphic display device, and a means for delaying an audio signal which is linked to an audio reproduction device,
b) inputting an audio signal into said audio signal processor, and simultaneously inputting said audio signal into said means for delaying an audio signal to produce a delayed audio signal,
c) analyzing said audio signal with said audio signal processor to determine the time variant spectral content of said audio signal,
d) inputting said time variant spectral content into said video processing device,
e) said video processing device initiating an optical effect based on said time variant spectral content of said audio signal,
f) outputting said optical effect from said video processing device, and simultaneously outputting said delayed audio signal from said means for delaying an audio signal,
g) displaying said optical effect on said graphic display device, and simultaneously playing said delayed audio signal on said audio reproduction device.
14. The method of claim 13 wherein said means for delaying an audio signal comprises an audio signal buffer which delays the output of said delayed audio signal for a predetermined time delay.
15. The method of claim 13 wherein said audio signal is prerecorded on a recording medium and said means for delaying an audio signal comprises a device for reading said recording medium, said device having a first reading head for reading said prerecorded audio signal from said recording medium and inputting said audio signal into said audio signal processor, and a second reading head for reading said prerecorded audio signal from said recording medium at a predetermined time delay after said first reading head has read said prerecorded audio signal and outputting the delayed audio signal to said audio reproduction device.
16. The method of claim 1 wherein said beat in said audio signal has an amplitude and said optical effect has a magnitude which is proportional to said amplitude.
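The claims above describe the pipeline only in prose and specify no implementation. As a minimal illustrative sketch of the claimed flow — frame-by-frame spectral analysis (claims 7/13), a beat detector whose hit amplitude could scale the effect magnitude (claim 16), and a palette derived from the instantaneous spectral content (claims 4 and 10) — the following could be written; all function names, the band split, and the threshold value are hypothetical, not from the patent:

```python
# Illustrative sketch only; names and parameters are assumptions, not the
# patented implementation.
import math

def spectral_content(frame):
    """Magnitudes of a naive DFT: one frame's 'instantaneous spectral content'."""
    n = len(frame)
    return [abs(sum(frame[t] * complex(math.cos(2 * math.pi * k * t / n),
                                       -math.sin(2 * math.pi * k * t / n))
                    for t in range(n))) / n
            for k in range(n // 2)]

def detect_beats(frames, threshold=0.5):
    """Crude beat detector: flag frames whose low-band energy exceeds a threshold.

    Returns (frame_index, amplitude) pairs; per claim 16, the amplitude could
    set the magnitude of the optical effect triggered on that beat.
    """
    hits = []
    for i, frame in enumerate(frames):
        spectrum = spectral_content(frame)
        low_band = sum(spectrum[:max(1, len(spectrum) // 4)])
        if low_band > threshold:
            hits.append((i, low_band))
    return hits

def palette_from_spectrum(spectrum):
    """Map low/mid/high band energies to one RGB palette entry (claims 4, 10)."""
    third = max(1, len(spectrum) // 3)
    bands = [sum(spectrum[i * third:(i + 1) * third]) for i in range(3)]
    peak = max(bands) or 1.0
    return tuple(int(255 * b / peak) for b in bands)
```

In use, the video processing device would alternate behavior on successive hits from `detect_beats` — initiating an optical effect on one beat (steps h–j of claim 7) and changing it on the next (steps k–m), for example by redefining the palette via `palette_from_spectrum` and then reversing its cycling direction.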
PCT/US1994/003181 1993-03-23 1994-03-23 Sound-to-light graphics system WO1994022128A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU69415/94A AU6941594A (en) 1993-03-23 1994-03-23 Sound-to-light graphics system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US3593393A 1993-03-23 1993-03-23
US08/035,933 1993-03-23

Publications (2)

Publication Number Publication Date
WO1994022128A1 true WO1994022128A1 (en) 1994-09-29
WO1994022128A9 WO1994022128A9 (en) 1994-11-10

Family

ID=21885621

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1994/003181 WO1994022128A1 (en) 1993-03-23 1994-03-23 Sound-to-light graphics system

Country Status (2)

Country Link
AU (1) AU6941594A (en)
WO (1) WO1994022128A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5048390A (en) * 1987-09-03 1991-09-17 Yamaha Corporation Tone visualizing apparatus
US5243582A (en) * 1990-07-06 1993-09-07 Pioneer Electronic Corporation Apparatus for reproducing digital audio information related to musical accompaniments

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1996016509A1 (en) * 1994-11-21 1996-05-30 David Althammer Video animation device based on the production of audio-influenced video signals
WO1999026412A1 (en) * 1997-11-19 1999-05-27 X.Ist Realtime Technologies Gmbh Unit and method for transforming and displaying acoustic signals
EP0918435A2 (en) * 1997-11-24 1999-05-26 Sony Electronics Inc. Systems and methods for processing audio or video effects
EP0918435A3 (en) * 1997-11-24 2002-07-17 Sony Electronics Inc. Systems and methods for processing audio or video effects
FR2798803A1 (en) * 1999-09-16 2001-03-23 Antoine Vialle Digital multimedia processing special image effects having operator defined multiple sources creating real time user combinations all sources dependent following several steps.
US7038683B1 (en) * 2000-01-28 2006-05-02 Creative Technology Ltd. Audio driven self-generating objects
EP1151774A2 (en) * 2000-05-02 2001-11-07 Samsung Electronics Co., Ltd. Method for automatically creating dance patterns using audio signal
EP1151774A3 (en) * 2000-05-02 2004-01-07 Samsung Electronics Co., Ltd. Method for automatically creating dance patterns using audio signal
US7400361B2 (en) 2002-09-13 2008-07-15 Thomson Licensing Method and device for generating a video effect
WO2004068495A1 (en) * 2003-01-31 2004-08-12 Miclip S.A. Method and device for controlling an image sequence run coupled to an audio sequence and corresponding programme
DE10304098B4 (en) * 2003-01-31 2006-08-31 Miclip S.A. Method and device for controlling a sequence of sound coupled image sequence and associated program
US8062089B2 (en) 2006-10-02 2011-11-22 Mattel, Inc. Electronic playset
US8292689B2 (en) 2006-10-02 2012-10-23 Mattel, Inc. Electronic playset
US9977643B2 (en) 2013-12-10 2018-05-22 Google Llc Providing beat matching
DE102014118075B4 (en) * 2014-01-08 2021-04-22 Adobe Inc. Perception model synchronizing audio and video
WO2015120333A1 (en) * 2014-02-10 2015-08-13 Google Inc. Method and system for providing a transition between video clips that are combined with a sound track
US9747949B2 (en) 2014-02-10 2017-08-29 Google Inc. Providing video transitions
US9972359B2 (en) 2014-02-10 2018-05-15 Google Llc Providing video transitions
CN110085252A (en) * 2019-03-28 2019-08-02 体奥动力(北京)体育传播有限公司 The sound picture time-delay regulating method of race production center centralized control system

Also Published As

Publication number Publication date
AU6941594A (en) 1994-10-11

Similar Documents

Publication Publication Date Title
US7876331B2 (en) Virtual staging apparatus and method
WO1994022128A1 (en) Sound-to-light graphics system
WO1994022128A9 (en) Sound-to-light graphics system
US6084169A (en) Automatically composing background music for an image by extracting a feature thereof
US7999167B2 (en) Music composition reproduction device and composite device including the same
JPH09500747A (en) Computer controlled virtual environment with acoustic control
JPH02502788A (en) Improvements in interactive video systems
US20030054882A1 (en) Game apparatus, method of reporducing movie images and recording medium recording program thereof
JP5241805B2 (en) Timing offset tolerance karaoke game
WO1997002558A1 (en) Music generating system and method
DeWitt Visual music: searching for an aesthetic
US7940370B2 (en) Interactive zoetrope rotomation
ES2356386T3 (en) METHOD FOR SUPPLYING AN AUDIO SIGNAL AND METHOD FOR GENERATING BACKGROUND MUSIC.
US7184051B1 (en) Method of and apparatus for rendering an image simulating fluid motion, with recording medium and program therefor
US20040082381A1 (en) System and method for video choreography
EP1085472A2 (en) Method of creating image frames, storage medium and apparatus for executing program
US20090015583A1 (en) Digital music input rendering for graphical presentations
US7053906B2 (en) Texture mapping method, recording medium, program, and program executing apparatus
JP2001269483A (en) Dynamic image reproducing method and music game device
GB2532034A (en) A 3D visual-audio data comprehension method
KR100383019B1 (en) Apparatus for authoring a music video
McGee et al. Voice of sisyphus: An image sonification multimedia installation
Parker Introduction to Game Development: Using Processing
Agamanolis High-level scripting environments for interactive multimedia systems
Greuel et al. Sculpting 3D worlds with music: advanced texturing techniques

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AT AU BB BG BR BY CA CH CN CZ DE DK ES FI GB HU JP KP KR KZ LK LU LV MG MN MW NL NO NZ PL PT RO RU SD SE SK UA UZ VN

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG

COP Corrected version of pamphlet

Free format text: PAGE 3,DESCRIPTION,AND PAGES 1/6-3/6,DRAWINGS,REPLACED BY NEW PAGES BEARING THE SAME NUMBER;AFTER RECTIFICATION OF OBVIOUS ERRORS AS AUTHORIZED BY THE INTERNATIONAL SEARCHING AUTHORITY

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: CA