CN113963723A - Music presentation method, device, equipment and storage medium - Google Patents
- Publication number
- CN113963723A (application CN202111090442.6A)
- Authority
- CN
- China
- Prior art keywords
- music
- tone
- score
- output
- preset
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/10—Digital recording or reproducing
- G11B20/10009—Improvement or modification of read or write signals
- G11B20/10037—A/D conversion, D/A conversion, sampling, slicing and digital quantisation or adjusting parameters thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/165—Management of the audio stream, e.g. setting of volume, audio stream path
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention provides a music presentation method, apparatus, device and storage medium, wherein the method comprises the following steps: acquiring a music score of the music to be presented according to a user's selection; acquiring syllable information in the music score, the syllable information including the tone and/or time of each syllable; determining the output frequency value of each tone in the music score according to a preset frequency output table, the table including a preset correspondence between syllable information and output frequency values; and sequentially outputting interactive signals corresponding to the syllable information in the music score at the preset output frequency values. By obtaining the score of the music to be presented, determining the output frequency corresponding to each note in the score from the preset note output-frequency comparison table, and outputting interactive signals in the order of the notes at their corresponding frequencies, the music is presented through interactive signals output at preset frequencies, meeting users' personalized requirements for music presentation and thereby improving the user experience.
Description
Technical Field
The present invention relates to the field of music playing technologies, and in particular, to a music presentation method, apparatus, device, and storage medium.
Background
As an art form, music is widely enjoyed; many users like listening to music in their leisure time or during sports. With the wide adoption of intelligent terminal products and the growing personalization of user demands, the audio and video multimedia processing capability of an intelligent terminal has become an important index of its user experience. However, existing music players provide only an independent, chiefly audio-visual playing function. This single function cannot meet users' personalized requirements, so the user experience is poor, and the playing device cannot be triggered to interactively play and output music signals.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The invention mainly aims to provide a music presentation method, so as to solve the technical problem that music presentation in the prior art is too limited in form to meet users' personalized requirements.
To achieve the above object, the present invention provides a music presentation method, including: acquiring a music score of music to be presented according to selection of a user;
acquiring syllable information in the music score, wherein the syllable information comprises tone of syllables and/or time of syllables;
determining an output frequency value of a tone in the music score according to a preset frequency output table, wherein the preset frequency output table comprises a preset corresponding relation between syllable information and the output frequency value;
and sequentially outputting interactive signals corresponding to syllable information in the music score according to a preset output frequency value.
Preferably, the step of obtaining the score of the music to be presented according to the selection of the user comprises:
performing music signal separation on the music to be presented to obtain a plurality of single-channel music signals;
analyzing the plurality of single-channel music signals to obtain music score information of the plurality of single-channel music signals;
determining a target score of the music to be presented based on the score information of the plurality of single-channel music signals.
Preferably, the step of obtaining syllable information in the score, the syllable information including tone of syllable and/or time of syllable comprises:
acquiring notes in the target music score;
determining the musical interval, scale and tone of the target musical score according to the musical notes;
and obtaining the tone of the syllable and/or the time of the syllable in the target music score based on the musical interval, the musical scale and the tone of the target music score.
Preferably, the step of determining an output frequency value of a tone in the music score according to a preset frequency output table, where the preset frequency output table includes a preset correspondence between syllable information and the output frequency value, includes:
setting note output frequencies of various tones in the music score based on the C tone character frequency comparison table;
establishing a conversion relation of corresponding signal output frequencies between the musical notes in the music score and the C tone tuning symbols;
and determining the signal output frequency value of the tone in the music score according to the C tone character frequency comparison table and the conversion relation.
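The "C tone character frequency comparison table" referenced in these steps can be sketched as a standard equal-temperament lookup (A4 = 440 Hz) for the C-major scale. The patent does not publish its exact table, so the numbered-notation (jianpu) keys and frequency values below are illustrative assumptions based on standard music theory:

```python
# Hypothetical sketch of the C-tone note-to-frequency comparison table:
# equal-temperament frequencies (A4 = 440 Hz) for the C-major scale, 4th octave.
# These are standard values, not necessarily the patent's exact table.
C_TONE_FREQ_TABLE = {
    "1": 261.63,  # C4 (do)
    "2": 293.66,  # D4 (re)
    "3": 329.63,  # E4 (mi)
    "4": 349.23,  # F4 (fa)
    "5": 392.00,  # G4 (sol)
    "6": 440.00,  # A4 (la)
    "7": 493.88,  # B4 (si)
}

def note_to_frequency(numbered_note: str) -> float:
    """Look up the output frequency for a numbered-notation (jianpu) note."""
    return C_TONE_FREQ_TABLE[numbered_note]
```

A score in another key would first be converted to its C-tone counterparts (the "conversion relation" of the steps above) before this lookup is applied.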
Preferably, the step of determining the signal output frequency value of the tone in the musical score according to the C-tone tuner frequency look-up table and the scaling relationship further includes:
establishing a corresponding relation among a musical interval, a musical scale and a tone and a musical note in the target music score of the music to be presented;
and calculating the signal output frequency value of the tone in the music score according to the corresponding relation.
Preferably, the method further comprises:
and setting the output time length of the signal frequency of each note based on the C tone character frequency comparison table and the conversion relation.
Preferably, the step of sequentially outputting the interactive signals corresponding to the syllable information in the music score according to a preset output frequency value includes:
outputting a vibration and/or light signal based on the signal output frequency corresponding to each note, so as to realize an interactive tactile effect;
and presenting the music to be presented according to the interactive touch signal, the output duration and the audio.
Further, in order to achieve the above object, the present invention also provides a music presentation apparatus comprising:
the music score acquisition module is used for acquiring a music score of music to be presented according to the selection of a user;
a note determining module, configured to obtain syllable information in the score, where the syllable information includes a tone of a syllable and/or a time of the syllable;
the frequency determining module is used for determining the output frequency value of the tone in the music score according to a preset frequency output table, and the preset frequency output table comprises preset corresponding relation between syllable information and the output frequency value;
and the signal output module is used for sequentially outputting the interactive signals corresponding to the syllable information in the music score according to a preset output frequency value.
In order to achieve the above object, the present invention also provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the steps of the music presentation method as described in any one of the above.
In addition, in order to achieve the above object, the present invention provides a computer-readable storage medium having stored thereon a music presentation program which, when executed by a processor, implements the steps of the music presentation method as described in any one of the above.
The invention provides a music presentation method, which comprises the following steps: acquiring a music score of the music to be presented according to a user's selection; acquiring syllable information in the music score, the syllable information including the tone and/or time of each syllable; determining the output frequency value of each tone in the music score according to a preset frequency output table, the table including a preset correspondence between syllable information and output frequency values; and sequentially outputting interactive signals corresponding to the syllable information in the music score at the preset output frequency values. By obtaining the notes in the score of the user-selected music and outputting the music interaction signal at the corresponding frequencies via the preset note-frequency conversion table, the music to be presented is presented to the user in an innovative way, meeting the user's unique and personalized experience requirements.
Drawings
FIG. 1 is a schematic diagram of a terminal/device structure of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a music presentation method according to a first embodiment of the present invention;
FIG. 3 is a flowchart illustrating a music presentation method according to a second embodiment of the present invention;
FIG. 4 is a flowchart illustrating the detailed steps of step S20 in FIG. 2 according to the present invention;
FIG. 5 is a flowchart illustrating a music presentation method according to a third embodiment of the present invention;
FIG. 6 is a flowchart illustrating a detailed step of step S303 in FIG. 5;
FIG. 7 is a flowchart illustrating a music presentation method according to a fourth embodiment of the present invention;
fig. 8 is a schematic structural diagram of a music presentation device according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The main solution of the embodiment of the invention is as follows: obtaining a music score of the music to be played; obtaining the tone of one or more syllables and/or the time of the syllable in the music score; determining output frequency values of one or more tones in the score according to a preset frequency output table; and when the music to be played is played, sequentially outputting signals of one or more tones in the music score according to a preset output frequency value.
Because the music player in the prior art has only an independent, chiefly audio-visual presentation function, it cannot meet users' personalized requirements, resulting in a poor user experience; nor can it meet the requirement that the playing device interactively output music signals.
The invention provides a solution in which, while music is played, interactive signals are sequentially output according to the preset note output frequencies of the music's score, so that the presentation of the music is accompanied by interactive signal output, meeting users' personalized listening requirements and improving the user experience.
As shown in fig. 1, fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention.
Those skilled in the art will appreciate that the computer apparatus 300 of the embodiments of the present application is an apparatus, as described above, for performing one or more of the methods of the present application. These devices may be specially designed and manufactured for the required purposes, or they may comprise devices known from general-purpose computers. The devices store computer programs 200 or application programs that are selectively activated or reconfigured. Such a computer program 200 may be stored in a device-readable (e.g., computer-readable) medium, including, but not limited to, any type of disk (floppy disks, hard disks, optical disks, CD-ROMs, magneto-optical disks), ROMs (Read-Only Memories), RAMs (Random Access Memories), EPROMs (Erasable Programmable Read-Only Memories), EEPROMs (Electrically Erasable Programmable Read-Only Memories), flash memory, magnetic cards, or optical cards, or any other type of medium suitable for storing electronic instructions, each coupled to a bus. That is, a readable medium includes any medium that stores or transmits information in a form readable by a device (e.g., a computer).
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, the memory 400, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and a music presentation program.
In the terminal shown in fig. 1, the processor 500 may be configured to call the music presentation program stored in the memory 400 and perform the following operations:
obtaining a music score of music to be played;
acquiring syllable information in the music score, wherein the syllable information comprises tone of syllables and/or time of syllables;
determining an output frequency value of a tone in the music score according to a preset frequency output table, wherein the preset frequency output table comprises a preset corresponding relation between syllable information and the output frequency value;
and outputting interactive signals corresponding to syllable information in the music score according to a preset output frequency value in sequence while playing the music to be played.
Further, the processor 500 may call the music presentation program stored in the memory 400, and also perform the following operations:
acquiring a music score of music to be presented according to selection of a user;
acquiring syllable information in the music score, wherein the syllable information comprises tone of syllables and/or time of syllables;
determining an output frequency value of a tone in the music score according to a preset frequency output table, wherein the preset frequency output table comprises a preset corresponding relation between syllable information and the output frequency value;
and sequentially outputting interactive signals corresponding to syllable information in the music score according to a preset output frequency value.
Further, the processor 500 may call the music presentation program stored in the memory 400, and also perform the following operations:
the step of obtaining the music score of the music to be presented according to the selection of the user comprises the following steps:
performing music signal separation on the music to be presented to obtain a plurality of single-channel music signals;
analyzing the plurality of single-channel music signals to obtain music score information of the plurality of single-channel music signals;
determining a target score for the music to be presented based on score information for the plurality of singles.
Further, the processor 500 may call the music presentation program stored in the memory 400, and also perform the following operations:
the step of determining an output frequency value of a tone in the score according to a preset frequency output table, where the preset frequency output table includes a preset correspondence between syllable information and the output frequency value, includes:
setting note output frequencies of various tones in the music score based on the C tone character frequency comparison table;
establishing a conversion relation of corresponding signal output frequencies between the musical notes in the music score and the C tone tuning symbols;
and determining the signal output frequency value of the tone in the music score according to the C tone character frequency comparison table and the conversion relation.
Further, the processor 500 may call the music presentation program stored in the memory 400, and also perform the following operations: the step of determining the signal output frequency value of the tone in the music score according to the C-tone tuner frequency comparison table and the conversion relation further includes:
establishing a corresponding relation among a musical interval, a musical scale and a tone and a musical note in the target music score of the music to be presented;
and calculating the signal output frequency value of the tone in the music score according to the corresponding relation.
Further, the processor 500 may call the music presentation program stored in the memory 400, and also perform the following operations:
the method further comprises the following steps:
and setting the output time length of the signal frequency of each note based on the C tone character frequency comparison table and the conversion relation.
Further, the processor 500 may call the music presentation program stored in the memory 400, and also perform the following operations:
the step of sequentially outputting the interactive signals corresponding to the syllable information in the music score according to the preset output frequency value comprises the following steps:
outputting a signal of vibration and/or light based on the signal output frequency corresponding to each music note to realize interaction touch effect;
and presenting the music to be presented according to the interactive touch signal, the output duration and the audio.
The terminal device of the embodiment of the invention can be a personal-care small household appliance or smart wearable electronic product, such as an electric toothbrush, electric massager, electric beauty instrument, electric face washer, electric breast enhancer, electric hair clipper, electric shaver, electric cup/kettle, sleep aid, infrared physiotherapy instrument, eye-protection instrument or electric massage comb; it can also be a mobile terminal device with a display function, such as a PC, game controller, smartphone, tablet computer, e-book reader, MP3 player, MP4 player or portable computer.
Referring to fig. 2, a first embodiment of a music presentation method according to the present invention provides a music presentation method, including:
step S10, obtaining the music score of the music to be presented according to the selection of the user;
step S20, obtaining syllable information in the music score, wherein the syllable information comprises the tone of the syllable and/or the time of the syllable;
step S30, determining the output frequency value of the tone in the music score according to a preset frequency output table, wherein the preset frequency output table comprises a preset corresponding relation between syllable information and the output frequency value;
and step S40, sequentially outputting interactive signals corresponding to the syllable information in the music score according to a preset output frequency value.
Specifically, in this embodiment, the music file to be played is obtained according to the user's selection; the file may come from local storage or from the network, and no special requirement is placed on its format in this embodiment. The score of the music is then obtained from the music file. In general, a music score marks the length and pitch of a piece with special symbols, recording the music so that users can record and "read" it. Therefore, based on the information recorded in the score, signals of different frequencies are output for different notes to achieve the effect of interactive music presentation. The tone of one or more syllables and/or the time of the syllables is obtained from the score; the output signal frequency value corresponding to each syllable is mapped according to a preset tone-frequency output table; and signals are output at the frequencies corresponding to the syllables in the score. The terminal or a bone-conduction device then sequentially outputs interactive signals, such as vibration or light, at the frequencies corresponding to the notes of the music selected by the user.
In this embodiment, a terminal device applies the music presentation method provided by the present invention. For example, an electric toothbrush equipped with the method works as follows: according to the target music selected by the user, the MCU chip of the electric toothbrush obtains the score of the music to be presented and, according to the syllable information in the score and the preset note output-frequency table of interaction signals, sequentially outputs interactive signals at the preset frequencies in the order of the notes in the score. The electric toothbrush in this embodiment can output the note vibration frequency signals corresponding to the music in a vibration manner; the note frequency signals can also be output by other interactive manners, and the invention imposes no limitation in this regard.
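The flow described above can be sketched as a short routine: each note of the score is looked up in a preset note-to-frequency table and an interactive (vibration) event is emitted in score order. The table values, event format, and the `motor.vibrate` call mentioned below are illustrative assumptions, not the patent's actual data or API:

```python
# Minimal sketch of the claimed flow (e.g. on an electric toothbrush MCU):
# map each note of the score to an output frequency via a preset table and
# collect (frequency, duration) events in score order. Hypothetical values.
PRESET_FREQ_TABLE = {"C4": 261.63, "E4": 329.63, "G4": 392.00}

def score_to_events(score, default_duration=0.5):
    """score: iterable of (note_name, duration_s) pairs.
    Returns a list of (freq_hz, duration_s) interactive-signal events."""
    events = []
    for note, duration in score:
        freq = PRESET_FREQ_TABLE[note]  # preset note-to-frequency correspondence
        events.append((freq, duration if duration else default_duration))
    return events
```

A device driver would then consume the events sequentially, e.g. calling a hypothetical `motor.vibrate(freq, duration)` for each pair.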
In this embodiment, a music score of the music to be presented is acquired according to the user's selection; syllable information in the score is acquired, the syllable information including the tone and/or time of each syllable; the output frequency value of each tone in the score is determined according to a preset frequency output table containing a preset correspondence between syllable information and output frequency values; and interactive signals corresponding to the syllable information in the score are sequentially output at the preset output frequency values. The user-selected music is thus converted into interactive signals: signals at the notes' corresponding frequencies are output according to the preset per-note signal frequency values and the order of the notes in the score, so that the music is presented in a unique, interactive mode, meeting users' personalized requirements and further improving the user experience.
Further, referring to fig. 3, a second embodiment of a music presentation method according to the present invention provides a music presentation method, based on the above embodiment shown in fig. 2, the step S10 further includes:
step S101, performing music signal separation on the music to be presented to obtain a plurality of single-channel music signals;
step S102, analyzing the plurality of single-channel music signals to obtain score information of the plurality of single-channel music signals;
step S103, determining the target music score of the music to be presented based on the music score information of the plurality of single-channel music signals.
Specifically, in this embodiment, if the music to be played triggered by the user is a polyphonic music signal, the polyphonic signal is framed to obtain a plurality of audio frames; silence detection is performed on each audio frame to determine whether it is a silent frame; multi-fundamental-frequency detection is performed on each non-silent frame to obtain note information and fundamental-frequency estimates; harmonic number and amplitude estimation is performed for the different notes to obtain each note's amplitude and harmonic information and a Bayesian harmonic model, and time-domain partial signals are obtained using the fundamental-frequency estimates; the separated time-domain signals are then synthesized frame by frame according to the preset frame shift and frame count, to obtain the target score determined from the score information of the multiple single-channel music signals. The polyphonic music signal is separated into multiple monophonic channels, and the timbre, melody, rhythm and beat of each separated monophonic signal are extracted, so that a highly accurate polyphonic score is obtained from the audio information of the monophonic channels.
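The first two stages of this pipeline, framing and silence detection, can be illustrated as below. The frame length, hop size, and RMS threshold are assumptions for illustration; the multi-fundamental-frequency detection and Bayesian harmonic modeling stages are beyond this sketch:

```python
import math

# Illustrative preprocessing for the separation step: split the signal into
# overlapping frames and flag silent frames by RMS energy.
def frame_signal(signal, frame_len=1024, hop=512):
    """Return a list of frames (each a list of samples) with the given hop."""
    return [signal[i:i + frame_len]
            for i in range(0, len(signal) - frame_len + 1, hop)]

def is_silent(frame, rms_threshold=0.01):
    """A frame is treated as silent when its RMS energy is below the threshold."""
    rms = math.sqrt(sum(x * x for x in frame) / len(frame))
    return rms < rms_threshold
```

Non-silent frames would then be passed on to multi-F0 detection to recover the note information for each channel.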
In this embodiment, multi-tone music to be played is subjected to framing processing to obtain a plurality of single-channel music signals, a target music score of the music to be played is determined comprehensively according to the plurality of single-channel music signals, and then a frequency basis of an output signal corresponding to the music to be presented is determined according to notes in the target music score.
Further, as shown in fig. 4, based on the first and second embodiments, step S20 in fig. 2 further includes:
step S201, obtaining notes in the target music score;
step S202, determining the musical interval, scale and tone of the target music score according to the musical notes;
step S203, obtaining the time of the tone and/or syllable in the target music score based on the musical interval, the musical scale and the tone of the target music score.
Specifically, in this embodiment, a music file generally contains at least one note element. The corresponding output signal frequency is determined from the notes in the target score of the music to be played; the interval, scale and key formed by the arrangement of the notes in the score are obtained; and the tone and/or time of one or more syllables in the target score is obtained based on the interval, scale and key of the target score. That is, the output time is set through the output frequency corresponding to each note in the score together with the interval, scale and key formed by the notes. Finally, the output signals are combined in time according to the order of the notes in the score to present the interactive signals of the music.
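The per-note output time mentioned here can be derived from the score's tempo and each note's rhythmic value. The patent only states that an output duration is set per note; the tempo-based formula below is a standard music-timing calculation offered as an illustrative assumption:

```python
# Sketch of deriving each note's output duration from the score's tempo
# and the note's rhythmic value (1.0 = one quarter note / beat).
def note_duration_seconds(note_value_in_quarters: float, tempo_bpm: float) -> float:
    """Duration in seconds of a note at the given tempo (beats per minute)."""
    seconds_per_quarter = 60.0 / tempo_bpm
    return note_value_in_quarters * seconds_per_quarter
```

At 120 BPM a quarter note would thus be output for half a second, and a half note for one second.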
Further, referring to fig. 5, a third embodiment of a music presentation method according to the present invention provides a music presentation method, based on the above embodiment shown in fig. 2, the step S30 further includes:
step S301, setting note output frequencies of various tones in the music score based on a C tone character frequency comparison table;
step S302, establishing a conversion relation of corresponding signal output frequencies between the musical notes in the music score and the C tone tuning characters;
step S303, determining a signal output frequency value of a tone in the music score according to the C tuner frequency comparison table and the conversion relation;
and step S304, setting the output time length of the signal frequency of each note based on the C-tone tuner frequency comparison table and the conversion relation.
Specifically, in this embodiment, the music to be presented is output on the basis of the vibration-type interactive signal of one of the embodiments described above. At the rated voltage, the music frequencies are set on the basis of the C-tone note-frequency lookup table, so that touch, vision, and hearing simultaneously perceive the vibration frequency and tone of the music generated by the terminal through a motor, electrode, electromagnetic, acoustic-wave, or mechanical actuator (hereinafter "actuator"), forming a tangible, interactive, three-dimensional musical expression. On the basis of the C-tone note-frequency lookup table, and according to the relations between the various tones in the score and the C tone, the correspondence values among the intervals, scales, tones, and notes in the music are set as note vibration frequencies (0.1 to 120 vibrations per second) and are compared and converted, one by one, against the effective-power (rated-frequency) range produced by the actuator, to serve as the standard vibration frequency values called when tactile interactive music is executed. At the rated voltage, the effective-power range of the actuator is determined accurately; standard frequency values are converted according to the applicable frequency relation formulas; the ratio of each standard note frequency value, set on the basis of the C-tone note-frequency lookup table, to the corresponding value on the actuator is calculated; the output frequency of the actuator is set anew according to that ratio; and the standard frequency values of the tactile interactive music are thereby determined, to be called by the main control chip, or by an additional chip that outputs the tactile interactive music, while the actuator does effective work. Based on the effective-power range of the actuator on the terminal at the rated voltage, the proportion that each note's frequency value and frequency-duration value occupy within the actuator's effective-power (frequency) range is used to convert them into the note frequency values and duration values that can be output while the actuator does work. At the rated voltage, the main control chip (MCU) on the circuit board, or a chip added specifically to realize tactile interactive music, drives the actuator and converts each note frequency value, according to the mapping between note frequency values and the actuator's output frequency values, into the note frequencies that can be output while effective work is done, so that tactile, interactive, three-dimensional audio is output.
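As an illustrative sketch only (the excerpt states the 0.1–120 vibrations-per-second range but not its conversion formulas), rescaling a standard note frequency into an actuator's effective vibration range could look like the following; the one-octave note range C4–C5 is an assumption of this sketch:

```python
# Sketch: map a standard note frequency into an actuator's effective
# vibration-frequency range (the text above cites 0.1-120 vibrations/second).
# The linear ratio mapping and the assumed one-octave input range are
# illustrative assumptions, not the patent's actual conversion formulas.

NOTE_MIN_HZ, NOTE_MAX_HZ = 261.63, 523.25   # assumed note range (C4-C5)
MOTOR_MIN_HZ, MOTOR_MAX_HZ = 0.1, 120.0     # effective actuator range from the text

def motor_frequency(note_hz: float) -> float:
    """Linearly rescale a note frequency into the actuator's range, clamped."""
    ratio = (note_hz - NOTE_MIN_HZ) / (NOTE_MAX_HZ - NOTE_MIN_HZ)
    value = MOTOR_MIN_HZ + ratio * (MOTOR_MAX_HZ - MOTOR_MIN_HZ)
    return max(MOTOR_MIN_HZ, min(MOTOR_MAX_HZ, value))
```

Any monotonic mapping preserving the ordering of note frequencies would serve the same purpose; the clamp keeps out-of-range notes within what the actuator can physically produce.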
Referring to fig. 6, step S303 further includes:
step S3031, establishing the correspondence among the intervals, scales, tones, and notes in the target score of the music to be presented;
step S3032, calculating the signal output frequency value of a tone in the score according to that correspondence.
Specifically, in this embodiment, according to the music frequencies set on the basis of the C-tone note-frequency lookup table, the correspondence between the C-tone notes and the intervals, scales, and tones in the score of the music file to be played is established, so as to obtain the signal output frequency values of one or more tones in the score.
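The correspondence to the C-tone notes can be illustrated with the standard equal-temperament relation f = f_C · 2^(n/12); the reference value C4 ≈ 261.63 Hz and the restriction to natural notes are assumptions of this sketch, not values taken from the patent:

```python
# Sketch: derive a note's signal output frequency from its semitone
# offset relative to C, assuming 12-tone equal temperament.
# The reference C4 = 261.63 Hz is an assumed value for illustration;
# the patent's own C-tone note-frequency lookup table would supply it.

C4_HZ = 261.63  # assumed reference frequency for middle C

# Semitone offsets of the natural notes relative to C
SEMITONES = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def note_frequency(name: str, octave: int = 4) -> float:
    """Frequency of a natural note in equal temperament, relative to C4."""
    offset = SEMITONES[name] + 12 * (octave - 4)
    return C4_HZ * 2 ** (offset / 12)
```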
Further, referring to fig. 7, a fourth embodiment of the music presentation method according to the present invention is provided. Based on the embodiment shown in fig. 2 above, step S40 further includes:
step S401, outputting a vibration and/or light signal based on the signal output frequency corresponding to each note, so as to realize an interactive touch effect;
and step S402, presenting the music to be presented according to the interactive touch signal, the output duration, and the audio.
Specifically, in this embodiment, a score of the music to be played is obtained; the tones and/or times of one or more syllables in the score are obtained; the output frequency values of one or more tones in the score are determined according to a preset frequency output table; and the signals of one or more tones in the score are output in sequence according to the preset output frequency values. The music is presented by determining the interactive signals to be output from the notes in the score of the music to be played, and outputting the corresponding interactive signals in the order of the notes in the score and at the preset signal output frequencies. The output interactive signal may be a vibration and/or light signal serving as a touch interactive signal; the invention is not limited in this respect.
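A minimal sketch of this sequential output, with an assumed preset frequency table and with signal emission modelled as collecting events rather than driving actual hardware:

```python
# Sketch: output interactive signals for each note in score order, using a
# preset note-to-output-frequency table. Emission is modelled as collecting
# (note, frequency, duration) events; a real device would instead drive a
# motor and/or light at each frequency. Table values below are assumed.

FREQ_TABLE = {"C": 10.0, "D": 20.0, "E": 30.0, "G": 50.0}  # assumed preset table (Hz)

def play_score(score):
    """score: list of (note_name, duration_seconds) pairs in playing order."""
    events = []
    for note, duration in score:
        freq = FREQ_TABLE.get(note)
        if freq is None:
            continue  # skip notes missing from the preset table
        events.append({"note": note, "frequency_hz": freq, "duration_s": duration})
    return events
```

The order of the returned events follows the order of the notes in the score, which is the essential property the method relies on.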
In addition, referring to fig. 8, an embodiment of the present invention further provides a music presentation apparatus, including:
a music score obtaining module 10, configured to obtain a music score of music to be presented according to a selection of a user;
a note determining module 20, configured to obtain syllable information in the score, where the syllable information includes a tone of a syllable and/or a time of the syllable;
a frequency determining module 30, configured to determine an output frequency value of a tone in the musical score according to a preset frequency output table, where the preset frequency output table includes a preset corresponding relationship between syllable information and the output frequency value;
and a signal output module 40, configured to output, in sequence, the interactive signals corresponding to the syllable information in the score according to a preset output frequency value.
In addition, an embodiment of the present invention further provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein the memory has stored thereon instructions executable by the at least one processor to enable the at least one processor to perform the steps of the music presentation method of any one of the above embodiments.
Furthermore, an embodiment of the present invention further provides a computer-readable storage medium, where a music presentation program is stored on the computer-readable storage medium, and when executed by a processor, the music presentation program implements the following operations:
acquiring a music score of music to be presented according to selection of a user;
acquiring syllable information in the music score, wherein the syllable information comprises tone of syllables and/or time of syllables;
determining an output frequency value of a tone in the music score according to a preset frequency output table, wherein the preset frequency output table comprises a preset corresponding relation between syllable information and the output frequency value;
and sequentially outputting interactive signals corresponding to syllable information in the music score according to a preset output frequency value.
Further, the music presentation program when executed by the processor further performs the following operations:
the step of obtaining the music score of the music to be presented according to the selection of the user comprises the following steps:
performing music signal separation on the music to be presented to obtain a plurality of single-channel music signals;
analyzing the plurality of single-channel music signals to obtain music score information of the plurality of single-channel music signals;
determining a target score for the music to be presented based on the score information of the plurality of single-channel music signals.
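As a simplified illustration of obtaining single-channel signals — here only de-interleaving a flat stereo sample stream, whereas separating the mixed sources of recorded music is a much harder task the excerpt does not detail:

```python
def split_channels(interleaved, num_channels=2):
    """De-interleave a flat sample stream [L0, R0, L1, R1, ...] into one
    list per channel. This illustrates only channel splitting; true source
    separation of mixed music requires far more than slicing."""
    return [interleaved[c::num_channels] for c in range(num_channels)]
```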
Further, the music presentation program when executed by the processor further performs the following operations:
the step of obtaining syllable information in the score, wherein the syllable information comprises tone of syllable and/or time of syllable, comprises:
acquiring notes in the target music score;
determining the musical interval, scale and tone of the target musical score according to the musical notes;
and obtaining the tone of the syllable and/or the time of the syllable in the target music score based on the musical interval, the musical scale and the tone of the target music score.
Further, the music presentation program when executed by the processor further performs the following operations:
the step of determining an output frequency value of a tone in the score according to a preset frequency output table, where the preset frequency output table includes a preset correspondence between syllable information and the output frequency value, includes:
setting the note output frequencies of the various tones in the score based on the C-tone note-frequency lookup table;
establishing a conversion relation of corresponding signal output frequencies between the notes in the score and the C-tone notes;
and determining the signal output frequency value of a tone in the score according to the C-tone note-frequency lookup table and the conversion relation.
Further, the music presentation program when executed by the processor further performs the following operations:
the step of determining the signal output frequency value of the tone in the score according to the C-tone note-frequency lookup table and the conversion relation further includes:
establishing a corresponding relation among a musical interval, a musical scale and a tone and a musical note in the target music score of the music to be presented;
and calculating the signal output frequency value of the tone in the music score according to the corresponding relation.
Further, the music presentation program when executed by the processor further performs the following operations:
the method further comprises the following steps:
and setting the output duration of the signal frequency of each note based on the C-tone note-frequency lookup table and the conversion relation.
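One common way to obtain a per-note output duration — assumed here, since the excerpt does not give its duration formula — is from the note value in beats and the tempo:

```python
# Sketch: derive the output duration of each note's signal from its note
# value (in beats) and the tempo. The relation beats * 60 / bpm is the
# standard tempo formula and an assumption of this sketch; the patent
# derives durations from its lookup table and conversion relation instead.

def note_duration_seconds(beats: float, bpm: float) -> float:
    """Seconds a note lasts at the given tempo (beats per minute)."""
    return beats * 60.0 / bpm
```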
Further, the music presentation program when executed by the processor further performs the following operations:
the step of sequentially outputting the interactive signals corresponding to the syllable information in the music score according to the preset output frequency value comprises the following steps:
outputting a vibration and/or light signal based on the signal output frequency corresponding to each note, so as to realize an interactive touch effect;
and presenting the music to be presented according to the interactive touch signal, the output duration, and the audio.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or system that comprises that element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (10)
1. A music presentation method, comprising:
acquiring a music score of music to be presented according to selection of a user;
acquiring syllable information in the music score, wherein the syllable information comprises tone of syllables and/or time of syllables;
determining an output frequency value of a tone in the music score according to a preset frequency output table, wherein the preset frequency output table comprises a preset corresponding relation between syllable information and the output frequency value;
and sequentially outputting interactive signals corresponding to syllable information in the music score according to a preset output frequency value.
2. The music presentation method according to claim 1, wherein the step of obtaining a score of music to be presented according to a user's selection comprises:
performing music signal separation on the music to be presented to obtain a plurality of single-channel music signals;
analyzing the plurality of single-channel music signals to obtain music score information of the plurality of single-channel music signals;
determining a target score for the music to be presented based on the score information of the plurality of single-channel music signals.
3. The music presentation method of claim 2, wherein the step of obtaining syllable information in the score, the syllable information comprising tone of syllable and/or time of syllable comprises:
acquiring notes in the target music score;
determining the musical interval, scale and tone of the target musical score according to the musical notes;
and obtaining the tone of the syllable and/or the time of the syllable in the target music score based on the musical interval, the musical scale and the tone of the target music score.
4. The music presentation method of claim 1, wherein the step of determining the output frequency value of the tone in the score according to a preset frequency output table, the preset frequency output table comprising preset correspondences of syllable information to output frequency values comprises:
setting the note output frequencies of the various tones in the score based on the C-tone note-frequency lookup table;
establishing a conversion relation of corresponding signal output frequencies between the notes in the score and the C-tone notes;
and determining the signal output frequency value of a tone in the score according to the C-tone note-frequency lookup table and the conversion relation.
5. The music presentation method of claim 4, wherein the step of determining the signal output frequency value of the tone in the score according to the C-tone note-frequency lookup table and the conversion relation further comprises:
establishing a corresponding relation among a musical interval, a musical scale and a tone and a musical note in the target music score of the music to be presented;
and calculating the signal output frequency value of the tone in the music score according to the corresponding relation.
6. The music presentation method of claim 4, wherein the method further comprises:
and setting the output duration of the signal frequency of each note based on the C-tone note-frequency lookup table and the conversion relation.
7. The music presentation method according to any one of claims 1 to 6, wherein said step of sequentially outputting interactive signals corresponding to syllable information in said score according to a preset output frequency value comprises:
outputting a vibration and/or light signal based on the signal output frequency corresponding to each note, so as to realize an interactive touch effect;
and presenting the music to be presented according to the interactive touch signal, the output duration, and the audio.
8. A music presentation device, comprising:
the music score acquisition module is used for acquiring a music score of music to be presented according to the selection of a user;
a note determining module, configured to obtain syllable information in the score, where the syllable information includes a tone of a syllable and/or a time of the syllable;
the frequency determining module is used for determining the output frequency value of the tone in the music score according to a preset frequency output table, and the preset frequency output table comprises preset corresponding relation between syllable information and the output frequency value;
and the signal output module is used for sequentially outputting the interactive signals corresponding to the syllable information in the music score according to a preset output frequency value.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein the memory has stored thereon instructions executable by the at least one processor to enable the at least one processor to perform the steps of the music presentation method of any one of claims 1 to 7.
10. A computer storage medium, comprising: the computer readable storage medium has stored thereon a music presentation program which, when executed by a processor, implements the steps of the music presentation method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111090442.6A CN113963723B (en) | 2021-09-16 | 2021-09-16 | Music presentation method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113963723A true CN113963723A (en) | 2022-01-21 |
CN113963723B CN113963723B (en) | 2023-05-26 |
Family
ID=79461924
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111090442.6A Active CN113963723B (en) | 2021-09-16 | 2021-09-16 | Music presentation method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113963723B (en) |
Citations (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH1165560A (en) * | 1997-08-13 | 1999-03-09 | Giatsuto:Kk | Music score generating device by computer |
JP2000295681A (en) * | 1999-04-09 | 2000-10-20 | Mega House:Kk | Musical tone vibration imparting device |
JP2004186831A (en) * | 2002-11-29 | 2004-07-02 | Alps Electric Co Ltd | Vibration generator |
US20070012165A1 (en) * | 2005-07-18 | 2007-01-18 | Samsung Electronics Co., Ltd. | Method and apparatus for outputting audio data and musical score image |
CN101197970A (en) * | 2007-12-28 | 2008-06-11 | 熊猫电子集团有限公司 | Method for implementing music DIY on television |
CN101252801A (en) * | 2008-03-21 | 2008-08-27 | 中兴通讯股份有限公司 | Method and apparatus for controlling light |
JP2010224430A (en) * | 2009-03-25 | 2010-10-07 | Hachinohe Institute Of Technology | Automatic music collection device, scale identification program, scale discrimination program, electric traditional stringed musical instrument music automatic collection system, and electric shamisen music automatic collection system |
CN102587233A (en) * | 2011-01-14 | 2012-07-18 | 交通运输部公路科学研究所 | Pavement music generating method |
CN103854644A (en) * | 2012-12-05 | 2014-06-11 | 中国传媒大学 | Automatic duplicating method and device for single track polyphonic music signals |
CN103932867A (en) * | 2014-05-12 | 2014-07-23 | 江本旋 | Music-driven vibration body builder |
CN104580692A (en) * | 2014-12-02 | 2015-04-29 | 广东欧珀移动通信有限公司 | Method and device for enabling mobile phone to dance with music |
CN104778467A (en) * | 2015-02-12 | 2015-07-15 | 北京邮电大学 | Automatic musicofasong photographing and playing system |
CN104811538A (en) * | 2015-03-26 | 2015-07-29 | 努比亚技术有限公司 | Audio-based breathing lamp control method and device |
CN204709649U (en) * | 2015-06-05 | 2015-10-21 | 哈尔滨市华宇医用电子仪器有限公司 | Music electric energy fourteen channels therapeutic instrument |
US20160047076A1 (en) * | 2013-04-09 | 2016-02-18 | Haier Group Corporation | Music washing machine and control method thereof |
CN105635912A (en) * | 2016-01-29 | 2016-06-01 | 深圳市因为科技有限公司 | Luminous device and acousto-optic combination device |
CN105763293A (en) * | 2014-12-19 | 2016-07-13 | 北京奇虎科技有限公司 | Method and system for playing music in sound wave-based data transmission |
CN106782460A (en) * | 2016-12-26 | 2017-05-31 | 广州酷狗计算机科技有限公司 | The method and apparatus for generating music score |
CN107393565A (en) * | 2016-05-16 | 2017-11-24 | 青岛海尔洗衣机有限公司 | A kind of method and washing machine using motor sounding |
CN206676630U (en) * | 2017-04-24 | 2017-11-28 | 漳州仂元工业有限公司 | A kind of Miniature music fountain |
US20180349495A1 (en) * | 2016-05-04 | 2018-12-06 | Tencent Technology (Shenzhen) Company Limited | Audio data processing method and apparatus, and computer storage medium |
CN109692400A (en) * | 2018-12-29 | 2019-04-30 | 深圳市腾迪医疗科技有限公司 | A kind of cell electrodes pulse needle treating instrument |
CN112162721A (en) * | 2020-09-10 | 2021-01-01 | 珠海市魅族科技有限公司 | Music playing method and device, electronic equipment and storage medium |
CN112382255A (en) * | 2020-10-26 | 2021-02-19 | 清能德创电气技术(北京)有限公司 | Music playing method based on alternating current servo system |
WO2021101665A1 (en) * | 2019-11-22 | 2021-05-27 | Microsoft Technology Licensing, Llc | Singing voice synthesis |
Non-Patent Citations (1)
Title |
---|
WANG Lijun; LI Meng: "Design of a Simple Electronic Organ Based on FPGA", Electronic Science and Technology *
Also Published As
Publication number | Publication date |
---|---|
CN113963723B (en) | 2023-05-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9078065B2 (en) | System and method for displaying sound as vibrations | |
JP6361872B2 (en) | Vibration generation system, vibration generation device, vibration signal generation program, and vibration generation method | |
CN107393569A (en) | Audio frequency and video clipping method and device | |
CN104219570B (en) | Audio signal playing method and device | |
CN110215607A (en) | Massage method and device based on electro photoluminescence | |
CN108831437A (en) | A kind of song generation method, device, terminal and storage medium | |
CN109348274A (en) | A kind of living broadcast interactive method, apparatus and storage medium | |
CN107705776A (en) | The System and method for that a kind of intelligent piano or so keyboard subregion uses | |
CN104505103B (en) | Voice quality assessment equipment, method and system | |
CN107371075A (en) | Microphone | |
CN109243479A (en) | Acoustic signal processing method, device, electronic equipment and storage medium | |
WO2020140552A1 (en) | Haptic feedback method | |
CN113963723B (en) | Music presentation method, device, equipment and storage medium | |
CN107248406A (en) | A kind of method and device for automatically generating terrible domestic animals song | |
CN111831250A (en) | Audio processing method and device, storage medium and electronic equipment | |
CN110491355A (en) | A kind of electronic organ plays practice interactive system and electronic organ | |
Weber et al. | Towards a framework for ubiquitous audio-tactile design | |
US20230353800A1 (en) | Cheering support method, cheering support apparatus, and program | |
CN108305605A (en) | Human-computer interaction digital music instruments system based on computer phoneme video | |
WO2011147015A1 (en) | System and method for displaying sound as vibrations | |
WO2010084830A1 (en) | Voice processing device, chat system, voice processing method, information storage medium, and program | |
JP6115932B2 (en) | Sound generating apparatus and sound generating program | |
CN111201799A (en) | Microphone, sound processing system, and sound processing method | |
WO2023051651A1 (en) | Music generation method and apparatus, device, storage medium, and program | |
Balandra et al. | Selective listening attention enhancement, using a simultaneous visual and haptic stimuli |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||