CN118660369A - Dynamic light effect control method for piano lamp - Google Patents


Info

Publication number
CN118660369A
Authority
CN
China
Prior art keywords
note
frequency
lamplight
algorithm
time
Prior art date
Legal status
Pending
Application number
CN202411148520.7A
Other languages
Chinese (zh)
Inventor
李家航
Current Assignee
Walsun Lighting Huizhou Co ltd
Original Assignee
Walsun Lighting Huizhou Co ltd
Priority date
Filing date
Publication date
Application filed by Walsun Lighting Huizhou Co ltd
Priority to CN202411148520.7A
Publication of CN118660369A
Legal status: Pending


Landscapes

  • Auxiliary Devices For Music (AREA)

Abstract

The application provides a dynamic light effect control method for a piano lamp, comprising the following steps: acquiring the real-time audio signal of a piano performance with an audio acquisition device, and performing spectrum analysis on the signal with a Fourier transform algorithm to obtain the frequency distribution characteristics of the piano sound; identifying the notes in the audio signal with a peak detection algorithm based on the spectrum analysis result, calculating the start time, duration and pitch of each note, and constructing a note sequence data structure; predicting the frequency changes of the note sequence with a Kalman filtering algorithm and computing the inertia characteristics of the note frequencies to obtain the trend and speed of frequency change; and generating the dynamic light projection effect with a graphics rendering engine, controlling the rhythm of the light changes according to the timing information of the note sequence, and thereby synchronizing sound and light.

Description

Dynamic light effect control method for piano lamp
Technical Field
The invention relates to the field of information technology, and in particular to a dynamic light effect control method for a piano lamp.
Background
The dynamic light effect of a piano lamp can give the player an appealing combination of sound and light during a performance. In realizing such dynamic control, a system that analyzes the inertia of piano sound frequencies and drives continuous light projection faces a key technical problem: how to accurately capture and analyze the subtle changes and continuity characteristics of piano sound frequencies in a real-time playing environment. Because the piano has a rich timbre, the frequency span between notes is large and playing speed and dynamics vary widely, so traditional spectrum analysis methods struggle to accurately identify the fine transitions between notes. At the same time, the delay between the frequency analysis result and the light projection control affects the synchrony of the audio-visual experience. In addition, the light projection system must present continuous dynamic effects matched to the frequency changes within a limited projection area, which places high demands on the real-time performance and smoothness of the projection algorithm. Extracting effective frequency features in a complex acoustic environment and quickly converting them into smooth light dynamics, while keeping system latency low and precision high, is the core challenge of this technology. Solving it requires research and innovation across sound signal processing, machine learning, real-time control, and related fields.
Disclosure of Invention
The invention provides a dynamic light effect control method of a piano lamp, which mainly comprises the following steps:
Acquiring real-time audio signals played by a piano by adopting audio acquisition equipment, and performing spectrum analysis on the audio signals by a Fourier transform algorithm to obtain frequency distribution characteristics of piano sounds;
According to the spectrum analysis result, identifying notes in the audio signal by using a peak detection algorithm, calculating the starting time, duration and pitch of each note, and constructing a note sequence data structure;
For the note sequence data, applying a sliding window technique to calculate the time interval and pitch difference between adjacent notes, judging the continuity and transition characteristics of the notes, and generating a note continuity index;
Predicting the frequency change of the note sequence through a Kalman filtering algorithm, and calculating the inertia characteristic of the note frequency to obtain the frequency change trend and speed;
Based on the note continuity index and the frequency inertia characteristic, designing a dynamic change rule of light projection, and mapping the note continuity and the frequency inertia to the color, brightness and position parameters of the light;
Generating a dynamic effect of light projection by adopting a graph rendering engine, and controlling the rhythm of light change according to the time information of the note sequence to realize synchronization of sound and light;
Sending the light control instructions to the projection device on the piano lamp through a network communication module, driving the projection device to adjust the projected picture in real time, and presenting a dynamic light effect matched to the piano performance.
The technical scheme provided by the embodiment of the invention can have the following beneficial effects:
The invention discloses a method for processing the real-time audio signal of a piano performance and generating dynamic light projection changes. In this application scenario, the invention acquires the real-time audio signal of the performance through the acquisition device, performs spectrum analysis on it with a Fourier transform algorithm, and identifies the notes in the signal with a peak detection algorithm. To ensure accurate note data, the invention calculates the start time, duration and pitch of each note and constructs a note sequence data structure. It then calculates the time interval and pitch difference between adjacent notes with a sliding window technique, judging the continuity and transition characteristics of the notes and generating a note continuity index. Based on these data, the invention predicts the frequency changes of the note sequence with a Kalman filtering algorithm, calculating the inertia characteristics of the note frequency and obtaining the trend and speed of the frequency change.
A distinctive problem in this scenario is how to transform the audio data analyzed in real time into a visual effect, i.e. a dynamic light effect. By designing dynamic change rules for the light projection, the invention maps the continuity and frequency inertia of the notes to the colour, brightness and position parameters of the light, synchronizing sound and light. In this process, a graphics rendering engine generates the dynamic light projection effect, and the rhythm of the light changes is controlled according to the timing information of the note sequence. Finally, the light control instructions are sent to the projection device on the piano lamp through a network communication module, driving the device to adjust the projected picture in real time and presenting a dynamic light effect matched to the piano performance.
In general, the invention combines high-precision processing of the audio signal with real-time dynamic visual presentation. This not only improves the audience's viewing experience and enhances the expressiveness and appeal of the performance, but also offers an innovative mode of interaction for live performances, making the combination of music and visual art tighter and more diverse.
Drawings
Fig. 1 is a flowchart of a dynamic light effect control method of a piano lamp according to the present invention.
Fig. 2 is a schematic diagram of a dynamic light effect control method of a piano lamp according to the present invention.
Fig. 3 is a further schematic diagram of a dynamic light effect control method of a piano lamp according to the present invention.
Detailed Description
For a further understanding of the present application, the application is described in detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting. It should be noted that, for convenience of description, only the portions related to the application are shown in the drawings.
As shown in Figs. 1-3, the dynamic light effect control method of the piano lamp specifically comprises the following steps:
step S101, acquiring real-time audio signals played by a piano by adopting an audio acquisition device, and performing frequency spectrum analysis on the audio signals through a Fourier transform algorithm to obtain frequency distribution characteristics of piano sounds.
The real-time audio signal of the piano performance is obtained with an audio acquisition device, and the analog audio signal is converted into a digital signal by an analog-to-digital converter. According to the Nyquist sampling theorem, the sampling frequency must be more than twice the highest frequency of the audio signal to avoid aliasing distortion. After the digital audio signal is obtained, the audio signal is subjected to spectrum analysis with a fast Fourier transform algorithm, converting the time-domain signal into a frequency-domain signal. The spectrum analysis yields the frequency distribution characteristics of the piano sound, including frequency components such as the fundamental and its overtones. The notes corresponding to the main frequency components in the spectrum are determined according to the standard frequencies of the piano notes. The dominant frequencies within each time frame are determined with a frequency-based pitch detection algorithm and mapped to corresponding MIDI note numbers. Through continuous analysis of the time frames, a complete MIDI note sequence is obtained, representing the melody and chord information of the piano performance. From the MIDI note sequence, a score generation algorithm produces a score representation of the performance, including information such as pitch, note value and dynamics. The generated score is compared with a standard score by a score alignment algorithm, and accuracy and expressiveness indices of the performance are calculated. The player's skill and level are then evaluated and fed back according to these indices.
For example, during the acquisition of the real-time audio signal of a piano performance, a high-fidelity condenser microphone may be placed 20 cm above the piano soundboard to obtain a clean audio signal. The sampling frequency is set to 44.1 kHz, which is more than twice the 4186 Hz fundamental frequency of the piano's highest note, so aliasing distortion is avoided. The acquired analog audio signal is converted into a digital signal by a 24-bit analog-to-digital converter, ensuring sufficient dynamic range and signal-to-noise ratio. The digital audio signal is then processed by a fast Fourier transform, windowed with a Hanning window of 2048 sample points, giving a frequency resolution of about 21.5 Hz. The spectrum analysis yields the energy distribution of the piano sound over the fundamental and its overtones. A mapping table from frequency to MIDI note number is built from the standard frequencies of the piano's 88 keys. The fundamental frequency of each time frame is calculated from the autocorrelation function using the YIN pitch detection algorithm and quantized to the nearest MIDI note number. By continuously analyzing the time frames of the audio signal, a complete MIDI note sequence is obtained, reflecting the pitch, duration and dynamics of the performance. From the MIDI note sequence, a score generation algorithm, taking parameters such as note value, dynamics and tempo into account, produces a staff representation conforming to music-theory rules. The generated score is aligned with a standard score by a dynamic time warping algorithm, and the timing and dynamics deviation of each note is calculated to obtain quantitative indices of performance accuracy and expressiveness.
Finally, the player's skill level is evaluated comprehensively from these indices, and targeted practice advice and feedback are provided to help the player improve performance quality.
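The frequency-to-MIDI quantization described in this step follows from the standard equal-temperament relation (A4 = 440 Hz = MIDI 69); a minimal sketch, with illustrative function names:

```python
import math

# Equal-temperament mapping: A4 = 440 Hz corresponds to MIDI note 69.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def freq_to_midi(freq_hz: float) -> int:
    """Quantize a detected fundamental frequency to the nearest MIDI note number."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

def midi_to_name(midi: int) -> str:
    """Human-readable note name, e.g. 60 -> 'C4'."""
    return f"{NOTE_NAMES[midi % 12]}{midi // 12 - 1}"

# The piano's 88 keys span MIDI 21 (A0, 27.5 Hz) to MIDI 108 (C8, 4186 Hz),
# so the frequency-to-note mapping table covers exactly this range.
```

For instance, `freq_to_midi(4186.0)` yields 108, the piano's highest key C8, which matches the standard frequencies used to build the mapping table above.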
Step S102, according to the spectrum analysis result, the notes in the audio signal are identified by using a peak detection algorithm, the starting time, duration and pitch of each note are calculated, and a note sequence data structure is constructed.
Notes in the audio signal are identified with a peak detection algorithm according to the spectrum analysis result. The peak detection algorithm sets a threshold and takes the locations of spectral peaks exceeding that threshold as the frequencies of notes. From the note frequencies obtained by spectrum analysis, a fundamental frequency estimation algorithm calculates the fundamental frequency of each note and thereby determines its pitch. The start time and duration of each note are determined by analyzing how the note frequency changes over time. The start time, duration, pitch and other information of the identified notes are assembled into a note sequence data structure, which records these attributes for each note together with the ordering relationship between notes. By analyzing the note sequence data structure, high-level musical information such as melody and rhythm is obtained from the audio signal.
Illustratively, the audio signal is first subjected to spectrum analysis: a fast Fourier transform (FFT) algorithm converts the time-domain signal into a frequency-domain signal, yielding a spectrogram. The peak detection threshold is then set to 20% of the maximum spectral amplitude, and frequencies above the threshold are taken as note frequencies. For example, if a peak is detected near 261.6 Hz, the C4 note is deemed to be present in the audio. Next, the fundamental frequency of each note is estimated with the autocorrelation method, locating the first peak of the autocorrelation function. If the fundamental frequency of a note is estimated to be 260 Hz, the note's pitch is C4. The start time and duration of each note are determined by analyzing the change of note frequency over time with a 50 ms window. For example, if a C4 note is detected between 2 s and 8 s, the note starts at 2 s and lasts 6 s. Finally, the identified note information is assembled into a note sequence data structure that stores the start time, duration, pitch and other attributes of each note in time order. By analyzing the note sequence, high-level information such as melody and rhythm can be obtained: for example, by examining the duration distribution of the notes, the meter of the audio can be judged to be 4/4, and by analyzing the pitch trend in the note sequence, the melody can be judged to be ascending.
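The peak-detection rule and the note sequence data structure of this step can be sketched as follows (the 20% relative threshold matches the example; the `Note` field names are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class Note:
    """One entry of the note sequence data structure."""
    start_s: float     # onset time in seconds
    duration_s: float  # how long the note sounds
    pitch_midi: int    # quantized pitch as a MIDI note number

def detect_peak_freqs(magnitudes, freqs, rel_threshold=0.2):
    """Return the frequencies of spectral bins that are local maxima and
    exceed rel_threshold * (maximum magnitude), i.e. the 20% rule above."""
    cutoff = rel_threshold * max(magnitudes)
    peaks = []
    for i in range(1, len(magnitudes) - 1):
        if (magnitudes[i] >= cutoff
                and magnitudes[i] > magnitudes[i - 1]
                and magnitudes[i] >= magnitudes[i + 1]):
            peaks.append(freqs[i])
    return peaks
```

A spectrum with a single dominant bin near 261.6 Hz yields that frequency as the only peak, which is then quantized to C4 and stored as a `Note` with its onset time and duration.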
Step S103, aiming at the note sequence data, a sliding window technology is applied to calculate the time interval and pitch difference between adjacent notes, and the continuity and transition characteristics of the notes are judged to generate a note continuity index.
From the input note sequence data, a sliding window technique obtains the time interval and pitch difference between adjacent notes. Whether adjacent notes are continuous is judged against a time interval threshold and a pitch difference threshold: if the interval is at or below the time threshold and the pitch difference at or below the pitch threshold, the two notes are considered continuous; otherwise they are considered discontinuous. For continuous note sequences, a Markov chain model is used to calculate a transition probability matrix between notes, from which the transition characteristics of the note sequence are obtained. A continuity index of the note sequence is determined from the transition characteristics and compared against a threshold: if the index is at or above the threshold, the sequence is considered continuous, otherwise discontinuous. Discontinuous note sequences are aligned and warped with a dynamic time warping algorithm; the time interval and pitch difference between adjacent notes are recalculated from the aligned sequences, and the above steps are repeated until every note sequence has been judged continuous or discontinuous. Finally, the continuity index of the whole piece is calculated from the continuity indices of all note sequences and compared against an overall threshold: if the overall index is at or above the threshold, the piece is considered coherent, otherwise incoherent.
For example, in the note sequence data, a time interval threshold of 0.5 seconds and a pitch difference threshold of 2 semitones may be set. With the sliding window technique, the time interval and pitch difference between adjacent notes are calculated sequentially in 0.1-second steps. If, in a given note sequence, adjacent notes are 0.3 seconds apart with a pitch difference of 1 semitone, the two notes are considered continuous. For continuous note sequences, a Markov chain model calculates the transition probability matrix between notes. Assuming the transition probability from note A to note B is 0.8 and from note B to note C is 0.6, the transition characteristics of the note sequence are obtained. A continuity index threshold of 0.7 is set; if the continuity index is at or above 0.7, the note sequence is considered continuous. For discontinuous note sequences, a dynamic time warping algorithm computes the optimal alignment between note sequences by dynamic programming and aligns and warps them. The time interval and pitch difference between adjacent notes are then recalculated, and the above steps are repeated until all note sequences have been judged continuous or discontinuous. Finally, with an overall continuity index threshold of 0.8, the continuity index of the whole piece is calculated from the indices of all note sequences: if the overall index is at or above 0.8, the piece is considered coherent, otherwise incoherent. In this way the coherence of music fragments can be judged effectively, providing an important reference for music analysis and processing.
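The adjacent-pair continuity judgment can be sketched in a few lines; the thresholds (0.5 s gap, 2 semitones) and the definition of the index as the fraction of continuous adjacent pairs are illustrative assumptions, not the patent's exact formula:

```python
def continuity_index(notes, max_gap_s=0.5, max_pitch_diff=2):
    """notes: list of (start_s, duration_s, midi_pitch) tuples in time order.
    Adjacent notes are 'continuous' when the gap between the end of one note
    and the onset of the next is <= max_gap_s AND their pitch difference is
    <= max_pitch_diff semitones. Returns the fraction of continuous pairs."""
    if len(notes) < 2:
        return 1.0
    continuous = 0
    for prev, nxt in zip(notes, notes[1:]):
        gap = nxt[0] - (prev[0] + prev[1])  # silence between the two notes
        if gap <= max_gap_s and abs(nxt[2] - prev[2]) <= max_pitch_diff:
            continuous += 1
    return continuous / (len(notes) - 1)
```

For a three-note sequence where the first pair is close in time and pitch but the second pair jumps by ten semitones, the index is 0.5, below a 0.7 threshold, so the sequence would be judged discontinuous.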
Step S104, predicting the frequency change of the note sequence through a Kalman filtering algorithm, and calculating the inertia characteristic of the note frequency to obtain the frequency change trend and the frequency change speed.
The frequency changes of the note sequence are predicted with a Kalman filtering algorithm. First, the frequency data of the note sequence are acquired as input to the Kalman filter. The frequency changes are then modeled to obtain a state equation and an observation equation for the note frequency. From these equations, the Kalman filter predicts the frequency of the note sequence, producing predicted note frequencies for future moments. The inertia characteristics of the note frequency are calculated from the predicted changes: the first-order difference of the predicted values gives the velocity of the frequency change, and the second-order difference gives its acceleration; together these serve as the inertia characteristics of the note frequency. The trend and speed of the frequency change are then judged from the predicted values and the inertia characteristics: if the predictions show an upward trend with a high rate of change, the trend is judged to be rising quickly; if they show a downward trend with a low rate of change, the trend is judged to be falling slowly. Analyzing the predicted values and inertia characteristics thus yields the trend and speed of the note frequency changes.
Illustratively, a sequence of notes played on a piano is first obtained, comprising 100 notes with frequencies ranging from 261.6 Hz to 1046.5 Hz. These frequency data are fed into the Kalman filtering algorithm, and a state equation and an observation equation of the note frequency change are established. The state equation describes the dynamics of the frequency change, assuming it follows a first-order Markov process, i.e. the frequency at the current moment depends only on the frequency at the previous moment. The observation equation describes the relationship between observed and true frequency, assuming Gaussian observation noise. The Kalman filter estimates the optimal note frequency at each moment together with the covariance matrix of the estimation error. From these estimates, the first-order and second-order differences of the note frequency are calculated, giving the velocity and acceleration of the frequency change. For example, at the 50th note the predicted frequency is 523 Hz; the first-order difference relative to the previous moment is 12 Hz, indicating that the note frequency is rising quickly, while the second-order difference relative to the two preceding moments is 0.5 Hz, indicating that the rate of rise is changing slowly. From these inertia characteristics, the trend and speed of the frequency change are determined. Between the 50th and 80th notes, the predicted frequencies show a continuous upward trend, with first-order differences above 5 Hz and second-order differences above 1 Hz, so the frequency trend in this period is judged to be rising rapidly.
After the 80th note, the predicted values show a downward trend, with first-order differences below -3 Hz and second-order differences below -0.5 Hz, so the frequency trend in this period is judged to be falling slowly. By analyzing the predicted note frequencies and their inertia characteristics, the evolution of the note frequency can be judged in real time, providing a basis for analyzing and generating musical expression.
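A Kalman filter over a note-frequency series can be sketched in plain Python with a constant-velocity state (state = [frequency, frequency velocity]); the process noise `q` and measurement noise `r` values are illustrative guesses, not the patent's parameters:

```python
def kalman_track(freqs, q=1.0, r=4.0):
    """Filter a note-frequency series with a constant-velocity model.
    Returns the filtered frequency estimates, one per input sample."""
    x = [freqs[0], 0.0]            # state estimate: [frequency, velocity]
    P = [[1.0, 0.0], [0.0, 1.0]]   # estimate covariance
    out = []
    for z in freqs:
        # Predict with F = [[1, 1], [0, 1]] and Q = diag(q, q).
        x = [x[0] + x[1], x[1]]
        P = [[P[0][0] + P[0][1] + P[1][0] + P[1][1] + q, P[0][1] + P[1][1]],
             [P[1][0] + P[1][1],                         P[1][1] + q]]
        # Update with scalar measurement z, H = [1, 0], noise r.
        s = P[0][0] + r
        k0, k1 = P[0][0] / s, P[1][0] / s
        innov = z - x[0]
        x = [x[0] + k0 * innov, x[1] + k1 * innov]
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        out.append(x[0])
    return out

def differences(series):
    """First-order differences: the 'velocity' inertia feature in the text."""
    return [b - a for a, b in zip(series, series[1:])]
```

On a steadily rising frequency series the filtered estimates trend upward and the first-order differences are positive, matching the "rising rapidly" judgment; applying `differences` twice gives the second-order (acceleration) feature.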
Step S105, based on the note continuity index and the frequency inertia characteristic, a dynamic change rule of the lamplight projection is designed, and the note continuity and the frequency inertia are mapped to the color, the brightness and the position parameters of lamplight.
According to the continuity index and frequency inertia characteristics of the notes, a convolutional neural network extracts features from the note sequence, yielding a note continuity index and a frequency inertia feature vector. The extracted feature vectors are classified with a support vector machine to judge the continuity and the degree of frequency inertia of the note sequence. If the continuity index of the note sequence is above a preset threshold, the note continuity is mapped to a continuous gradient of the light colour; if it is below the threshold, the continuity is mapped to random colour changes. From the frequency inertia feature vector, a K-means clustering algorithm groups the inertia features, producing cluster centres for the different inertia levels; the inertia level of the current note is determined by the Euclidean distance between its inertia feature and the cluster centres. A high inertia level is mapped to an increase in light brightness, and a low level to a decrease. According to the time position of each note in the piece, a linear interpolation algorithm calculates the corresponding light projection coordinates, establishing the mapping between note time positions and projection positions.
The final dynamic light projection parameters, including colour, brightness and position coordinates, are determined with a weighted average algorithm that merges the colour rule mapped from the continuity index, the brightness rule mapped from the inertia characteristics and the projection coordinates mapped from the note time positions. With these parameters, the light projection device is controlled to adjust the colour, brightness and position of the projected light in real time, achieving a synchronized mapping between note characteristics and light dynamics. The note sequence is acquired continuously during the performance, the above steps are repeated, and the light projection effect is updated dynamically, forming a light-art expression that follows the rhythm and emotion of the music.
Illustratively, feature extraction is performed on the note sequence with a convolutional neural network according to the continuity index and frequency inertia features. First, the note sequence is converted into a 128x128 frequency-time matrix representing 128 time steps and 128 frequency bins. The matrix is then passed through 3 convolutional layers and 2 pooling layers; the convolution kernels are 3x3, 5x5 and 7x7 respectively, each convolution is followed by a ReLU activation, and the pooling layers use max pooling with 2x2 kernels and stride 2. After the convolution and pooling operations, a 256-dimensional feature vector is obtained, representing the continuity index and frequency inertia characteristics of the note sequence. The extracted feature vectors are fed into a support vector machine with a Gaussian kernel and penalty coefficient C = 10 to judge the continuity and frequency inertia level of the note sequence. If the continuity index is above 0.8, the note continuity is mapped to a continuous gradient of the light colour, with the speed of the colour change proportional to the index: the higher the index, the smoother the gradient. If the index is below 0.8, the continuity is mapped to random colour changes over the RGB gamut. The frequency inertia features are clustered with K-means, with the cluster count K set to 3, yielding cluster centres for the low, medium and high inertia levels.
The Euclidean distance between the current note's inertia feature and each cluster centre is calculated, and the nearest cluster gives the note's inertia level. A note at the high inertia level increases the light brightness by 20%; a note at the low level decreases it by 20%. The note's time position within the 4-minute piece is normalized to the range 0-1 and multiplied by the length of the projection area to obtain the projection coordinate for that note. Finally, the colour change mapped from the continuity index, the brightness change mapped from the inertia features and the projection coordinates mapped from the time positions are combined, and the final projection parameters are determined with a weighted average whose colour, brightness and position weights are 0.4, 0.35 and 0.25 respectively. By adjusting the colour, brightness and position of the light in real time, the note characteristics and light dynamics are mapped synchronously, creating an immersive light-art atmosphere that matches the rhythm and emotion of the music.
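The feature-to-light mapping of step S105 can be sketched as below; the 0.8 continuity threshold, the plus/minus 20% brightness scaling, the hue formula and the function names are illustrative assumptions in the spirit of the example:

```python
import random

def light_params(continuity, inertia_level, t_norm,
                 area_len=100.0, base_brightness=0.5):
    """Map note features to light projection parameters.
    continuity   : note continuity index in [0, 1]
    inertia_level: 'low' | 'mid' | 'high' (K-means cluster of the inertia feature)
    t_norm       : note position in the piece, normalized to [0, 1]"""
    # Continuity >= 0.8 -> smooth hue gradient; otherwise a random hue.
    mode = "gradient" if continuity >= 0.8 else "random"
    hue = t_norm * 360.0 if mode == "gradient" else random.uniform(0.0, 360.0)
    # High frequency inertia -> +20% brightness, low -> -20%.
    scale = {"low": 0.8, "mid": 1.0, "high": 1.2}[inertia_level]
    brightness = min(1.0, base_brightness * scale)
    # Time position maps linearly onto the projection area.
    position = t_norm * area_len
    return {"mode": mode, "hue": hue, "brightness": brightness,
            "position": position}
```

A highly continuous note with high inertia halfway through the piece thus lands at the middle of the projection area, in gradient mode, at 120% of the base brightness.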
Step S106, generating a dynamic effect of light projection by adopting a graphics rendering engine, and controlling the rhythm of light changes according to the time information of the note sequence to achieve synchronization of sound and light.
According to the time information of the note sequence, frequency-domain analysis is performed on the audio signal with a Fourier transform algorithm to obtain the frequency and amplitude characteristics of the audio. A frequency mapping algorithm maps the audio frequency to the corresponding lamplight color, and the lamplight brightness is determined from the amplitude. A three-dimensional graphics rendering engine builds a virtual scene and dynamically generates the lamplight projection effect from the audio frequency and amplitude characteristics. A time synchronization algorithm aligns the audio time axis with the lamplight animation time axis, so that sound and light change together. During rendering, the color and brightness of the projection are adjusted dynamically as the audio frequency changes, and an interpolation algorithm provides smooth transitions of the light effect. Meanwhile, a spatial mapping algorithm maps the light positions in the virtual scene to the actual physical space; the light control system drives the physical light fixtures so the virtual light effect is reproduced in the physical environment. Finally, a real-time feedback algorithm adjusts the light effect dynamically, updating the projection as the audio signal changes so that sound and light remain synchronized and consistent.
The audio signal is sampled, for example, at a 44.1 kHz sampling frequency with 16-bit depth. The sampled audio data is divided into frames of 1024 samples, and a fast Fourier transform (FFT) is applied to each frame to obtain its spectrum. The spectrum is then divided into 20 frequency bands, each mapped to a color: for example, low bands to red, middle bands to green, and high bands to blue. The energy of each band determines the brightness of its mapped color; the larger the energy, the higher the brightness value. A virtual scene is created in the Unity3D engine containing a sphere model that represents the light projection. The sphere's color and brightness are adjusted dynamically according to the frequency and amplitude characteristics of the audio: when low-frequency energy dominates, the sphere appears red, and when high-frequency energy dominates, it appears blue. A linear interpolation algorithm blends smoothly between colors so that transitions are not abrupt. To synchronize sound and light, the audio time axis must be aligned with the light animation time axis: for audio lasting 120 seconds and a light animation running at 30 frames per second, the animation has 3600 frames in total, and a time mapping function maps the audio playback progress to the animation frame index so the two change together. In the actual physical environment, the light fixtures are controlled over the DMX512 protocol.
From the sphere's position in the virtual scene, the corresponding DMX address is computed, and the sphere's color and brightness values are sent to the matching light fixture so the virtual light effect appears in the real environment. A real-time feedback mechanism also adjusts the light effect as the audio signal changes, keeping sound and light synchronized: when the audio energy rises suddenly, the light brightness rises with it, and when the energy falls, the brightness falls. This real-time feedback produces a dynamic light effect synchronized with the rhythm of the music.
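The per-frame FFT, band-to-color mapping and time-axis alignment of this step can be sketched in numpy. This is a minimal sketch under the stated parameters (44.1 kHz, 1024-sample frames, 20 bands, 30 fps); the three-way low/mid/high grouping and all function names are illustrative assumptions, and no Unity3D or DMX output is modeled:

```python
import numpy as np

SAMPLE_RATE = 44100     # 44.1 kHz, 16-bit sampling as in the text
FRAME_LEN = 1024        # samples per analysis frame
N_BANDS = 20            # spectrum split into 20 bands

def band_energies(frame):
    """FFT one 1024-sample frame and sum magnitude energy into 20 bands."""
    spectrum = np.abs(np.fft.rfft(frame))        # 513 bins up to Nyquist
    bands = np.array_split(spectrum, N_BANDS)
    return np.array([b.sum() for b in bands])

def energies_to_rgb(energies):
    """Group the 20 bands into low/mid/high thirds and map them to
    red/green/blue, normalized so the dominant group saturates."""
    low, mid, high = np.array_split(energies, 3)
    rgb = np.array([low.sum(), mid.sum(), high.sum()])
    peak = rgb.max()
    return rgb / peak if peak > 0 else rgb

def animation_frame(elapsed_s, fps=30):
    """Map audio playback progress to the light-animation frame index
    (120 s at 30 fps -> 3600 frames in total)."""
    return int(elapsed_s * fps)
```

A frame dominated by low-frequency energy (e.g. a 200 Hz tone) maps to a red-dominant color, matching the low-band-to-red rule above.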
Step S107, the light control instruction is sent to the projection device on the piano lamp through the network communication module, driving the projection device to adjust the projected picture in real time and presenting a dynamic light effect matched to the piano performance.
According to the notes and rhythm information of the piano performance, a convolutional neural network algorithm extracts and analyzes features of the audio data, yielding parameters such as note timing, intensity and pitch. A support vector machine algorithm classifies the extracted audio features to identify the type and style of the note currently being played. A decision tree algorithm then generates a light control instruction matched to the identified note type and style, including parameters such as light color, brightness and flicker frequency. The generated instruction is sent in real time to the projection device on the piano lamp through the network communication module. On receiving the instruction, the projection device parses it with an optical flow algorithm to obtain the specific light effect parameters. According to the parsed parameters, a projection mapping algorithm renders the light effect in real time onto the designated position of the piano surface, forming dynamic light projection matched to the performance. Throughout the projection process, a Kalman filtering algorithm continuously filters the audio data to remove noise interference and improve note recognition accuracy, while an adaptive gain control algorithm adjusts the light brightness in real time so that the best light effect is displayed under different ambient illumination conditions. Looping through these steps keeps the piano performance and the light projection synchronized in real time, creating an immersive musical atmosphere.
Throughout the process, a long short-term memory neural network learns and memorizes historical note data and light control instructions, continuously optimizing the light-effect generation strategy so that the light projection adapts dynamically to the piano playing style and improves the artistic expressiveness of the performance.
Illustratively, during a piano performance the audio signal is acquired in real time by an audio acquisition device at a 44.1 kHz sampling frequency and 16-bit depth. The collected audio is preprocessed with a pre-emphasis filter, and Mel-frequency cepstral coefficients (MFCC) are used for feature extraction, producing 13-dimensional MFCC feature vectors. These vectors are fed into a convolutional neural network of 3 convolutional layers and 2 fully connected layers for feature analysis; the kernel sizes are 3×3, 4×4 and 5×5 respectively, each convolution is followed by a ReLU activation function and a max pooling operation, and network training yields parameters such as note timing, intensity and pitch. The note feature parameters output by the network are passed to a support vector machine (SVM) classifier with an RBF kernel to identify note types and performance styles; a grid-search optimization finds the optimal penalty factor C and kernel parameter g, bringing the classifier's recognition accuracy above 95%. Based on the note type and performance style identified by the SVM, a decision tree algorithm generates the matching light control instruction; feature selection uses the Gini index, and the tree is pruned via 10-fold cross-validation so that the generated instruction matches the note type more than 90% of the time.
The light control instruction generated by the decision tree is sent in real time through the network communication module to the piano-lamp projection device, which performs the image rendering: after receiving the instruction it parses it with an optical flow algorithm and, according to the light color, brightness, flicker frequency and other parameters the instruction contains, renders the light effect onto the piano surface in real time with a projection mapping algorithm at a rendering frame rate of 60 fps, keeping the light projection precisely synchronized with the performance. During rendering, a Kalman filtering algorithm continuously filters the audio data, adaptively adjusting the filter order and bandwidth to remove ambient noise interference and raising note recognition accuracy by more than 5%. Meanwhile, an adaptive gain control algorithm adjusts the light brightness in real time, dynamically setting the projection brightness (range 0-255) according to the ambient illumination intensity and the reflective properties of the piano surface, so the best effect is shown under different lighting conditions. Finally, a long short-term memory network of 3 LSTM layers, with 128, 256 and 128 hidden-layer nodes respectively, learns and memorizes historical note data and light control instructions; through continuous parameter tuning and model iteration, the light-effect generation strategy adapts dynamically to the playing style, keeping the synchronization rate between light projection and musical rhythm above 98% and greatly enhancing the artistic expressiveness of the performance.
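The Kalman smoothing of noisy note-frequency estimates, the first/second-difference inertia features, and the adaptive brightness gain can be sketched as follows. The noise parameters (`q`, `r`), the ambient-light gain model and all function names are illustrative assumptions; the patent does not fix these values:

```python
import numpy as np

def kalman_smooth(freqs, q=1e-3, r=4.0):
    """Minimal 1-D Kalman filter over raw note-frequency estimates (Hz):
    constant-value state model, process noise q, measurement noise r.
    Returns the filtered sequence."""
    x, p = float(freqs[0]), 1.0
    out = [x]
    for z in freqs[1:]:
        p += q                      # predict step
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update with measurement z
        p *= (1.0 - k)
        out.append(x)
    return np.array(out)

def inertia_features(freqs):
    """Velocity and acceleration of the frequency trajectory via
    first- and second-order differences."""
    v = np.diff(freqs)
    a = np.diff(v)
    return v, a

def adaptive_brightness(target, ambient_lux, max_lux=500.0):
    """Crude adaptive gain: raise the 0-255 projection brightness with
    ambient light so the effect stays visible, clipped at 255."""
    gain = 1.0 + ambient_lux / max_lux
    return int(min(255, target * gain))
```

A single noisy spike in the frequency sequence is pulled back toward the running estimate rather than passed through, which is the noise-rejection behavior the text attributes to the Kalman stage.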
The foregoing disclosure describes only preferred embodiments of the present invention and is not intended to limit its scope. Those skilled in the art will understand that all or part of the processes implementing the above embodiments, together with equivalent variations made according to the claims of the present invention, still fall within the scope covered by the invention.

Claims (8)

1. A method for dynamic light effect control of a piano lamp, the method comprising:
Acquiring real-time audio signals played by a piano by adopting audio acquisition equipment, and performing spectrum analysis on the audio signals by a Fourier transform algorithm to obtain frequency distribution characteristics of piano sounds;
According to the spectrum analysis result, identifying notes in the audio signal by using a peak detection algorithm, calculating the starting time, duration and pitch of each note, and constructing a note sequence data structure;
Aiming at the note sequence data, a sliding window technology is applied to calculate the time interval and pitch difference between adjacent notes, the continuity and transition characteristics of the notes are judged, and a note continuity index is generated;
Predicting the frequency change of the note sequence through a Kalman filtering algorithm, and calculating the inertia characteristic of the note frequency to obtain the frequency change trend and speed;
Based on the note continuity index and the frequency inertia characteristic, designing a dynamic change rule of light projection, and mapping the note continuity and the frequency inertia to the color, brightness and position parameters of the light;
Generating a dynamic effect of light projection by adopting a graph rendering engine, and controlling the rhythm of light change according to the time information of the note sequence to realize synchronization of sound and light;
And the network communication module is used for sending the light control instruction to the projection equipment on the piano lamp, driving the projection equipment to adjust the projection picture in real time, and presenting the dynamic light effect matched with the piano performance.
2. The method of claim 1, wherein the acquiring the real-time audio signal of the piano performance by the audio acquisition device, performing a frequency spectrum analysis on the audio signal by a fourier transform algorithm to obtain the frequency distribution characteristics of the piano sound, comprises:
Acquiring real-time audio signals of piano performance by adopting audio acquisition equipment, and converting the analog audio signals into digital signals through an analog-to-digital converter;
According to the Nyquist sampling theorem, the sampling frequency must be at least twice the highest frequency of the audio signal to avoid aliasing distortion;
After the digital audio signal is obtained, performing spectrum analysis on the audio signal by adopting a fast Fourier transform algorithm, and converting a time domain signal into a frequency domain signal;
Obtaining the frequency distribution characteristics of piano sound through frequency spectrum analysis, wherein the frequency distribution characteristics comprise fundamental frequency and overtone frequency components;
judging notes corresponding to main frequency components in the frequency spectrum according to standard frequencies of piano notes;
Determining a dominant frequency in each time frame by adopting a pitch detection algorithm based on frequency identification, and mapping the dominant frequency to a corresponding MIDI note number;
Obtaining a complete MIDI note sequence through continuous analysis of time frames, wherein the complete MIDI note sequence represents melody and chord information of piano playing;
According to the MIDI note sequence, a music score generating algorithm is adopted to generate a music score representation of piano performance, wherein the music score representation comprises pitch, a time value and dynamics information;
Comparing the generated music score with a standard music score through a music score alignment algorithm, and calculating performance accuracy and expressive force indexes;
And evaluating and feeding back the skill and level of the player according to the evaluation index.
3. The method of claim 1, wherein the identifying notes in the audio signal using a peak detection algorithm based on the spectral analysis results, calculating a start time, duration and pitch of each note, constructing a note sequence data structure, comprises:
identifying notes in the audio signal by adopting a peak detection algorithm according to the spectrum analysis result of the audio signal;
the peak detection algorithm determines the peak position exceeding the threshold value in the frequency spectrum as the frequency of the note by setting the threshold value;
According to the note frequency obtained by spectrum analysis, calculating the fundamental frequency of each note by adopting a fundamental frequency estimation algorithm, and further determining the pitch of the note;
Judging the starting time and duration of the notes by analyzing the change of the note frequency with time;
Constructing the start time, duration and pitch information of the identified notes as a note sequence data structure;
The note sequence data structure comprises the starting time, duration and pitch attribute of each note and the precedence relation among notes;
melody and rhythm information in the audio signal is obtained through analysis of the note sequence data structure.
4. A method according to claim 1, wherein the calculating a time interval and a pitch difference between adjacent notes for the note sequence data using a sliding window technique, determining continuity and transition characteristics of notes, and generating a note continuity indicator comprises:
acquiring the time interval and pitch difference between adjacent notes by adopting a sliding window technology according to the input note sequence data;
judging whether adjacent notes are continuous or not by setting a time interval threshold and a pitch difference threshold;
If the time interval of adjacent notes is less than or equal to the time interval threshold and the pitch difference is less than or equal to the pitch difference threshold, the two notes are considered to be continuous, otherwise, the notes are considered to be discontinuous;
For a continuous note sequence, calculating a transition probability matrix between notes by adopting a Markov chain model;
obtaining transition characteristics of the note sequence through a transition probability matrix;
determining a continuity index of the note sequence according to the transition characteristics;
the continuity index judges whether the note sequences are continuous or not by setting a threshold value;
if the continuity index is greater than or equal to the threshold value, the note sequence is considered to be continuous, otherwise, the note sequence is considered to be discontinuous;
For the discontinuous note sequence, adopting a dynamic time warping algorithm to align and warp the discontinuous note sequence;
recalculating the time interval and pitch difference between adjacent notes by the aligned and normalized note sequences and repeating the above steps until all note sequences are judged to be continuous or discontinuous;
Finally, according to the continuity indexes of all note sequences, calculating an overall continuity index for the whole music piece;
judging whether the whole music piece is coherent by setting an overall continuity index threshold;
If the overall continuity index is greater than or equal to the threshold, the music piece is considered coherent; otherwise, it is considered incoherent.
5. The method of claim 1, wherein predicting the frequency variation of the sequence of notes by a kalman filter algorithm, calculating the inertia characteristics of the note frequencies, and obtaining the frequency variation trend and velocity, comprises:
According to the frequency change of the note sequence, predicting the frequency change of the note sequence by adopting a Kalman filtering algorithm;
firstly, obtaining frequency data of a note sequence, and taking the frequency data as input of a Kalman filtering algorithm;
Then modeling the frequency change of the note sequence through a Kalman filtering algorithm to obtain a state equation and an observation equation of the note frequency change;
Then, according to the state equation and the observation equation, predicting the frequency change of the note sequence by adopting a Kalman filtering algorithm to obtain a predicted value of the note frequency at a future time;
calculating inertia characteristics of the note frequency based on the predicted note frequency variation;
obtaining the speed of note frequency change through the first-order difference of the note frequency predicted value;
obtaining the acceleration of the note frequency change through the second-order difference of the note frequency predicted value;
taking the speed and acceleration of the change of the note frequency as the inertia characteristics of the note frequency;
judging the change trend and speed of the note frequency according to the predicted value and inertia characteristic of the note frequency;
if the note frequency predicted value shows an ascending trend and the change speed is high, judging that the note frequency change trend is ascending and the change speed is high;
if the note frequency predicted value shows a descending trend and the change speed is slower, judging that the note frequency change trend is descending and the change speed is slow;
And obtaining the change trend and speed of the note frequency through analyzing the note frequency predicted value and the inertia characteristic.
6. The method of claim 1, wherein the designing a dynamic variation rule of the light projection based on the note continuity index and the frequency inertia feature maps the note continuity and the frequency inertia to color, brightness, and location parameters of the light, comprising:
According to the continuity index and the frequency inertia features of the musical notes, a convolutional neural network algorithm is adopted to perform feature extraction on the note sequence, obtaining the note continuity index and the frequency inertia feature vector;
Classifying the extracted feature vectors through a support vector machine algorithm, and judging the degree of continuity and frequency inertia of the note sequence;
If the continuity index of the note sequence is higher than a preset threshold, mapping the note continuity to a continuous gradient of the lamplight color;
If the continuity index of the note sequence is lower than the preset threshold, mapping the note continuity to random changes of the lamplight color;
According to the frequency inertia feature vector of the note sequence, clustering the frequency inertia feature by adopting a K-means clustering algorithm to obtain clustering centers under different frequency inertia levels;
determining the frequency inertia level of the current note through Euclidean distance between the clustering center and the current note frequency inertia characteristic;
if the frequency inertia level of the current note is higher, mapping the frequency inertia to the enhancement of the lamplight brightness;
If the frequency inertia level of the current note is lower, mapping the frequency inertia to weakening of the light brightness;
calculating corresponding lamplight projection position coordinates by adopting a linear interpolation algorithm according to the time positions of notes in the music piece to obtain the mapping relation between the time positions of the notes and the lamplight projection positions;
Determining final lamplight projection dynamic change parameters, including color, brightness and position coordinates, by merging the lamplight color change rule mapped from the note continuity index, the lamplight brightness change rule mapped from the frequency inertia feature, and the lamplight projection position coordinates mapped from the note time position, using a weighted average algorithm;
According to the determined dynamic change parameters of the lamplight projection, controlling the lamplight projection equipment to adjust the color, brightness and position of the projected lamplight in real time, and realizing synchronous mapping of music note characteristics and the dynamic change of the lamplight;
And continuously acquiring a note sequence in the music playing process, repeatedly executing the steps, and dynamically updating the light projection effect to form light artistic expression corresponding to the music rhythm and emotion.
7. The method of claim 1, wherein the generating the dynamic effect of the light projection using the graphics rendering engine controls the rhythm of the light variation according to the time information of the note sequence to achieve synchronization of the sound and the light, comprises:
according to the time information of the note sequence, carrying out frequency domain analysis on the audio signal by adopting a Fourier transform algorithm to obtain the frequency and amplitude characteristics of the audio;
Mapping the audio frequency to the corresponding lamplight color through a frequency mapping algorithm, and determining the lamplight brightness according to the amplitude;
constructing a virtual scene by adopting a three-dimensional graphic rendering engine, and dynamically generating a lamplight projection effect according to the audio frequency and amplitude characteristics;
Aligning an audio time axis with a lamplight animation time axis through a time synchronization algorithm to realize synchronous change of sound and lamplight;
in the rendering process, dynamically adjusting the color and brightness of the lamplight projection according to the change of the audio frequency, and realizing smooth transition of the lamplight effect through an interpolation algorithm;
meanwhile, mapping the lamplight position in the virtual scene to an actual physical space by adopting a space mapping algorithm, controlling the actual lamplight equipment by a lamplight control system, and presenting the virtual lamplight effect in the physical environment;
And finally, dynamically adjusting the light effect through a real-time feedback algorithm, and updating the light projection in real time according to the change of the audio signal so as to keep the synchronism and consistency of the sound and the light.
8. The method of claim 1, wherein the sending the light control command to the projection device on the piano lamp through the network communication module drives the projection device to adjust the projection screen in real time, and presents the dynamic light effect matched with the piano performance, comprises:
according to the musical notes and rhythm information played by the piano, performing feature extraction and analysis on the audio data by adopting a convolutional neural network algorithm to obtain the time, the dynamics and the pitch parameters of the musical notes;
Classifying and identifying the extracted audio features through a support vector machine algorithm, and judging the type and the style of the musical note played currently;
Generating a lamplight control instruction matched with the identified note type and the style of the musical performance by adopting a decision tree algorithm, wherein the lamplight control instruction comprises lamplight color, brightness and flicker frequency parameters;
the generated light control instruction is sent to the projection equipment on the piano lamp in real time through the network communication module;
After receiving the light control instruction, the projection equipment analyzes the instruction by adopting an optical flow algorithm to obtain specific parameters of the light effect;
According to the analyzed lamplight parameters, the lamplight effect is rendered to the appointed position of the piano surface in real time through a projection mapping algorithm, and dynamic lamplight projection matched with piano performance is formed;
in the light projection process, the Kalman filtering algorithm is continuously adopted to filter the audio data, so that noise interference is removed, and the accuracy of note identification is improved;
Meanwhile, the self-adaptive gain control algorithm is adopted to adjust the brightness of the lamplight in real time, so that the optimal lamplight effect can be ensured to be displayed under different environment illumination conditions;
through the cyclic iteration of the steps, the real-time synchronization of piano playing and lamplight projection is realized, and an immersive music playing atmosphere is created;
In the whole process, a long-short-term memory neural network algorithm is adopted to learn and memorize historical note data and lamplight control instructions, and the generation strategy of lamplight effect is continuously optimized, so that lamplight projection can be dynamically adapted to the piano playing style, and the artistic expression of playing is improved.
CN202411148520.7A 2024-08-21 2024-08-21 Dynamic light effect control method for piano lamp Pending CN118660369A (en)

Publications (1)

CN118660369A (en) — Publication Date: 2024-09-17
