CN112420006A - Method and device for operating simulated musical instrument assembly, storage medium and computer equipment - Google Patents



Publication number
CN112420006A
Authority
CN
China
Prior art keywords: sound source, information, target sound, target, touch
Legal status
Granted
Application number
CN202011192393.2A
Other languages: Chinese (zh)
Other versions: CN112420006B (en)
Inventor
高明飞
兰云柯
余婉
李悦华
张云彦
刘彦麟
曾浩强
马壮
张雨欣
Current Assignee
Tianjin Yake Interactive Technology Co ltd
Original Assignee
Tianjin Yake Interactive Technology Co ltd
Application filed by Tianjin Yake Interactive Technology Co ltd
Priority to CN202011192393.2A
Publication of CN112420006A
Application granted; publication of CN112420006B
Legal status: Active

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H: ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 1/00: Details of electrophonic musical instruments
    • G10H 1/0008: Associated control or indicating means
    • G10H 1/02: Means for controlling the tone frequencies, e.g. attack or decay; means for producing special musical effects, e.g. vibratos or glissandos
    • G10H 1/04: Means for controlling the tone frequencies by additional modulation
    • G10H 1/053: Means for controlling the tone frequencies by additional modulation during execution only
    • G10H 1/055: Means for controlling the tone frequencies by additional modulation during execution only, by switches with variable impedance elements
    • G10H 1/32: Constructional details
    • G10H 1/34: Switch arrangements, e.g. keyboards or mechanical switches specially adapted for electrophonic musical instruments

Abstract

The application discloses a method and an apparatus for operating a simulated musical instrument assembly, a storage medium, and computer equipment. The method includes: acquiring performance trigger information generated when a user performs a trigger operation on a device sensor of the simulated musical instrument assembly; determining target sound characteristic information corresponding to the performance trigger information, the target sound characteristic information including target sound time information, target sound frequency information, and target sound amplitude information; and determining and playing a target sound source based on a sample sound source file and the target sound characteristic information, the sample sound source file including at least one sample sound source and sample sound frequency information corresponding to each sample sound source. The method and apparatus can reflect not only the pitch but also the loudness of the played sound, and thereby the player's playing technique, so the simulation comes closer to the effect of playing a real instrument, while the data volume of the sample sound source file is reduced and memory resources are saved.

Description

Method and device for operating simulated musical instrument assembly, storage medium and computer equipment
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method and an apparatus for operating a simulated musical instrument component, a storage medium, and a computer device.
Background
Conventional musical instruments are common in daily life, and their sounding bodies are generally strings, membranes, reeds, plates, or metal bodies. Instruments such as the piano, violin, and erhu produce sound by string vibration, drums by vibration of the drumhead, and gongs by vibration of a metal body. However, conventional instruments are complicated to operate and difficult to learn.
In the related art, with the popularization of smartphones, applications have appeared that simulate musical instruments by installing an instrument application program on the phone. Such an application can implement some instrument functions by means of the phone's touch display screen. At present, however, these applications can only identify the position of the user's touch on the screen and determine the corresponding note from that position to realize playing. Because players differ in skill level and their fingering differs greatly, and the prior art cannot accurately identify these differences, the resulting simulation is monotonous and far from the character of the original instrument.
Disclosure of Invention
According to one aspect of the present application, there is provided a method of running a simulated musical instrument assembly for a game client, comprising:
acquiring performance trigger information generated by a user through a trigger operation on a device sensor of a simulated musical instrument component provided by the game client;
determining target tone characteristic information corresponding to the playing trigger information, wherein the target tone characteristic information comprises target tone time information, target tone frequency information and target tone amplitude information;
and determining and playing a target sound source based on a sample sound source file and the target sound characteristic information, wherein the sample sound source file comprises at least one sample sound source and sample sound frequency information corresponding to each sample sound source.
Optionally, the determining a target sound source based on the sample sound source file and the target sound feature information specifically includes:
inquiring whether a target sample sound source matched with the target sound frequency information exists in the sample sound source file;
if the target sound source exists, determining the target sound source based on the target sample sound source, the target sound amplitude information and the target sound time information;
if not, performing pitch-shifting processing on a sample sound source according to a preset sound source pitch-shifting rule and the target sound frequency information, and generating the target sound source in combination with the target sound amplitude information and the target sound time information, wherein the preset sound source pitch-shifting rule at least comprises a pitch-shifting rule matched with twelve-tone equal temperament.
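A pitch-shifting rule matched to twelve-tone equal temperament amounts to resampling a sample sound source by a power of 2^(1/12) per semitone. A minimal sketch of that rule follows; the function names and the choice of nearest-semitone rounding are illustrative, not taken from the patent:

```python
import math

def semitone_shift(sample_freq: float, target_freq: float) -> int:
    """Number of equal-temperament semitones from the sample pitch
    to the target pitch (rounded to the nearest semitone)."""
    return round(12 * math.log2(target_freq / sample_freq))

def playback_rate(sample_freq: float, target_freq: float) -> float:
    """Resampling ratio that shifts the sample sound source to the
    target frequency; each semitone multiplies the rate by 2**(1/12)."""
    return 2 ** (semitone_shift(sample_freq, target_freq) / 12)
```

For example, shifting a 440 Hz (A4) sample up to 880 Hz (A5) is twelve semitones, i.e. a playback-rate ratio of exactly 2, which is why a small set of samples can cover the whole scale.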
Optionally, the simulated musical instrument is of a first musical instrument type, the device sensor includes a touch sensor corresponding to the touch identification area, and the performance trigger information includes touch operation information;
the acquiring of the performance trigger information generated by the user through the triggering operation of the device sensor of the simulated musical instrument component specifically includes:
receiving a touch sensing signal corresponding to the touch sensor, and analyzing the touch sensing signal to determine touch operation information, wherein the touch operation information includes a touch position, a touch force, and a touch time, the target sound frequency information is matched with the touch position, the target sound amplitude information is matched with the touch force, and the target sound time information is matched with the touch time.
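The three pairings above can be sketched directly: frequency from position, amplitude from force, time from touch time. The position-to-frequency table and all names below are hypothetical stand-ins, not values from the patent:

```python
from dataclasses import dataclass

@dataclass
class TouchEvent:
    position: str   # identifier of the touched area, e.g. "1"
    force: float    # normalized pressure reading, 0.0 .. 1.0
    time_ms: int    # touch timestamp in milliseconds

# Hypothetical lookup: touch position -> target sound frequency in Hz.
POSITION_TO_FREQ = {"1": 261.63, "2": 293.66, "3": 329.63}  # C4, D4, E4

def to_sound_features(touch: TouchEvent) -> dict:
    """Map one touch event to the three target sound features:
    frequency from position, amplitude from force, time from touch time."""
    return {
        "frequency_hz": POSITION_TO_FREQ[touch.position],
        "amplitude": touch.force,
        "time_ms": touch.time_ms,
    }
```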
Optionally, the target tone feature information further includes touch gesture information matched with the touch position and the touch time, where the touch gesture information at least includes a slide gesture and a click gesture.
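One simple way to distinguish the slide and click gestures from the touch position over time is a displacement threshold between touch-down and touch-up; the threshold value and names below are assumptions for illustration only:

```python
SLIDE_DISTANCE = 20.0  # assumed threshold (screen units) separating click from slide

def classify_gesture(start_xy, end_xy):
    """Return 'slide' if the touch point moved at least SLIDE_DISTANCE
    between touch-down and touch-up, otherwise 'click'."""
    dx = end_xy[0] - start_xy[0]
    dy = end_xy[1] - start_xy[1]
    moved = (dx * dx + dy * dy) ** 0.5
    return "slide" if moved >= SLIDE_DISTANCE else "click"
```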
Optionally, the simulated musical instrument is of a second musical instrument type, the device sensor includes a touch sensor corresponding to the touch identification area and a sound sensor corresponding to the sound wave collection position, and the playing trigger information includes touch operation information and sound wave input information;
the acquiring of the performance trigger information generated by the user through the triggering operation of the device sensor of the simulated musical instrument component specifically includes:
receiving a touch sensing signal corresponding to the touch sensor and an acoustic wave signal corresponding to the acoustic sensor;
analyzing the touch sensing signal to determine touch operation information, and analyzing the sound wave signal to determine sound wave input information, wherein the touch operation information comprises a touch position and touch time, the sound wave input information comprises a sound wave peak value, the target sound frequency information is matched with the touch position, the target sound amplitude information is matched with the sound wave peak value, and the target sound time information is matched with the touch time.
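For this second instrument type the target sound amplitude is matched to the sound wave peak rather than to touch force. A minimal sketch of deriving that amplitude from captured microphone samples (the normalization scheme is assumed):

```python
def acoustic_peak(samples):
    """Peak absolute value of the captured waveform; stands in for how
    strongly the player blew toward the sound wave collection position."""
    return max(abs(s) for s in samples)

def amplitude_from_peak(samples, full_scale=1.0):
    """Target sound amplitude matched to the sound wave peak, clamped to 1.0."""
    return min(acoustic_peak(samples) / full_scale, 1.0)
```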
Optionally, before the acquiring of performance trigger information generated by a user through a trigger operation on a device sensor of the simulated musical instrument component, the method further includes:
receiving a broadcast performance instruction;
after determining the target sound source, the method further comprises:
generating a target sound source file based on the target sound source and the playing duration corresponding to the target sound source;
and when the playing time length corresponding to the target sound source file exceeds a first preset time length, sending the target sound source file to a preset server, so that the preset server generates a broadcast sound source file corresponding to a second preset time length based on the target sound source file and sends the broadcast sound source file to a broadcast object terminal, wherein the first preset time length is greater than the second preset time length.
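The client-side buffering described here can be sketched as follows. The concrete preset durations and class names are assumptions, since the patent leaves the first and second preset durations unspecified (it requires only that the first be greater than the second):

```python
FIRST_PRESET_S = 10.0   # assumed client-side upload threshold
SECOND_PRESET_S = 2.0   # assumed server-side broadcast chunk length

class PerformanceBuffer:
    """Accumulates target sound sources and flushes a batch for upload once
    the buffered playing duration exceeds the first preset duration."""

    def __init__(self):
        self.notes = []     # (onset_s, duration_s) pairs
        self.uploads = []   # batches handed to the preset server

    def add(self, onset_s, duration_s):
        self.notes.append((onset_s, duration_s))
        span = onset_s + duration_s - self.notes[0][0]
        if span > FIRST_PRESET_S:   # playing time exceeds first preset duration
            self.uploads.append(self.notes)
            self.notes = []
```

The server would then slice each uploaded batch into chunks of the second (shorter) preset duration before forwarding them to broadcast object terminals.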
Optionally, after receiving the broadcast sound source file, the broadcast object terminal determines the broadcast time of the broadcast sound source file according to the target sound time information corresponding to the target sound source in the broadcast sound source file and a third preset time length, and plays the broadcast sound source file according to the broadcast time.
Optionally, before the acquiring of performance trigger information generated by a user through a trigger operation on a device sensor of the simulated musical instrument component, the method further includes:
receiving a recording playing instruction;
after determining the target sound source, the method further comprises:
generating a target sound source file based on the target sound source, storing and sending the target sound source file to a preset server;
responding to a playback triggering instruction, and playing based on the local target sound source file or the target sound source file acquired from the preset server.
Optionally, the target sound characteristic information includes a plurality of target sound characteristics, and the plurality of target sound characteristics correspond to the plurality of target sound sources; the determining the target sound source specifically further includes:
acquiring first target sound time information corresponding to a first target sound source in the target sound sources, and determining a play time offset based on the first target sound time information and the current time, wherein the play time offset is used for determining play time corresponding to the target sound source in the broadcast sound source file and is used for responding to the playback trigger instruction to determine play time corresponding to the target sound source in the target sound source file.
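In other words, the play time offset is the difference between the current time and the first target sound's recorded time, and applying it to every note preserves the original spacing while starting playback immediately. A sketch (names assumed):

```python
def play_time_offset(first_note_time_ms, now_ms):
    """Offset between the first target sound's recorded time and 'now'."""
    return now_ms - first_note_time_ms

def schedule(note_times_ms, now_ms):
    """Shift every recorded note time by the play time offset so the first
    note plays immediately and the original relative timing is preserved."""
    offset = play_time_offset(note_times_ms[0], now_ms)
    return [t + offset for t in note_times_ms]
```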
Optionally, before the acquiring of performance trigger information generated by a user through a trigger operation on a device sensor of the simulated musical instrument component, the method further includes:
receiving a follow-up performance instruction;
after determining the target sound source, the method further comprises:
generating and saving a target sound source file based on the target sound source;
responding to a following triggering instruction, and analyzing a plurality of following note information corresponding to the target sound source file, wherein each following note information comprises a following position, a following strength, a following time and a following gesture;
outputting the note prompt information corresponding to the following note information one by one based on the following note information, and playing the sound source corresponding to the following note information in the target sound source file after receiving the operation feedback corresponding to the note prompt information.
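The prompt-then-wait loop for follow-up playing might look like the following sketch, with the prompt display, operation feedback, and playback supplied as callbacks (all names are illustrative):

```python
def follow_along(notes, show_prompt, got_feedback, play):
    """For each follow note: output its prompt, then play its sound only
    after the player's operation feedback for that prompt arrives."""
    for note in notes:
        show_prompt(note)
        if got_feedback(note):   # e.g. the player touched the prompted position
            play(note)
```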
Optionally, the method further comprises:
responding to a follow-up playing instruction of a preset sound source file, and analyzing a plurality of follow-up note information corresponding to the preset sound source file, wherein each follow-up note information comprises a follow-up position, follow-up force, follow-up time and a follow-up gesture;
outputting the note prompt information corresponding to the following note information one by one based on the following note information, and playing the sound source corresponding to the following note information in the preset sound source file after receiving the operation feedback corresponding to the note prompt information.
According to another aspect of the present application, there is provided an apparatus for running a simulated musical instrument assembly for a game client, comprising:
a performance information receiving module for acquiring performance trigger information generated by a user through a trigger operation with an apparatus sensor of a simulated musical instrument component provided by the game client;
the characteristic information acquisition module is used for determining target sound characteristic information corresponding to the playing trigger information, wherein the target sound characteristic information comprises target sound time information, target sound frequency information and target sound amplitude information;
and the target sound source determining module is used for determining and playing a target sound source based on a sample sound source file and the target sound characteristic information, wherein the sample sound source file comprises at least one sample sound source and sample sound frequency information corresponding to each sample sound source.
Optionally, the target sound source determining module specifically includes:
the sample sound source matching unit is used for inquiring whether a target sample sound source matched with the target sound frequency information exists in the sample sound source file;
a target sound source determination unit, configured to determine the target sound source based on the target sample sound source, the target sound amplitude information, and the target sound time information if the target sample sound source exists;
and a sample sound source pitch-shifting unit, configured to, if no matching target sample sound source exists, perform pitch-shifting processing on a sample sound source according to a preset sound source pitch-shifting rule and the target sound frequency information, and generate the target sound source in combination with the target sound amplitude information and the target sound time information, wherein the preset sound source pitch-shifting rule at least comprises a pitch-shifting rule matched with twelve-tone equal temperament.
Optionally, the simulated musical instrument is of a first musical instrument type, the device sensor includes a touch sensor corresponding to the touch identification area, and the performance trigger information includes touch operation information;
the performance information receiving module specifically includes:
the first performance information analyzing unit is configured to receive a touch sensing signal corresponding to the touch sensor, and analyze the touch sensing signal to determine touch operation information, where the touch operation information includes a touch position, a touch force, and a touch time, the target tone frequency information is matched with the touch position, the target tone amplitude information is matched with the touch force, and the target tone time information is matched with the touch time.
Optionally, the target tone feature information further includes touch gesture information matched with the touch position and the touch time, where the touch gesture information at least includes a slide gesture and a click gesture.
Optionally, the simulated musical instrument is of a second musical instrument type, the device sensor includes a touch sensor corresponding to the touch identification area and a sound sensor corresponding to the sound wave collection position, and the playing trigger information includes touch operation information and sound wave input information;
the performance information receiving module specifically includes:
the signal receiving unit is used for receiving a touch sensing signal corresponding to the touch sensor and a sound wave signal corresponding to the sound sensor;
the second performance information analyzing unit is used for analyzing the touch sensing signal to determine touch operation information and analyzing the sound wave signal to determine sound wave input information, wherein the touch operation information comprises a touch position and touch time, the sound wave input information comprises a sound wave peak value, the target sound frequency information is matched with the touch position, the target sound amplitude information is matched with the sound wave peak value, and the target sound time information is matched with the touch time.
Optionally, the apparatus further comprises:
a broadcast instruction receiving module for receiving a broadcast performance instruction before acquiring performance trigger information generated by a user through a trigger operation on a device sensor of the simulated musical instrument component;
the first file generation module is used for generating a target sound source file based on the target sound source and the playing time corresponding to the target sound source after the target sound source is determined;
the first file sending module is used for sending the target sound source file to a preset server when the playing time corresponding to the target sound source file exceeds a first preset time, so that the preset server generates a broadcast sound source file corresponding to a second preset time based on the target sound source file and sends the broadcast sound source file to a broadcast object terminal, and the first preset time is longer than the second preset time.
Optionally, after receiving the broadcast sound source file, the broadcast object terminal determines the broadcast time of the broadcast sound source file according to the target sound time information corresponding to the target sound source in the broadcast sound source file and a third preset time length, and plays the broadcast sound source file according to the broadcast time.
Optionally, the apparatus further comprises:
the recording instruction receiving module is used for receiving a recording playing instruction before acquiring playing triggering information generated by triggering operation of a device sensor of the simulated musical instrument assembly by a user;
the second file generation module is used for generating a target sound source file based on the target sound source after the target sound source is determined, storing the target sound source file and sending the target sound source file to a preset server;
and the target sound source playback module is used for responding to a playback trigger instruction and playing the target sound source file based on the local target sound source file or the target sound source file acquired from the preset server.
Optionally, the target sound characteristic information includes a plurality of target sound characteristics, and the plurality of target sound characteristics correspond to the plurality of target sound sources; the target sound source determining module specifically further includes:
an offset time determining unit, configured to obtain first target sound time information corresponding to a first target sound source in the target sound sources, and determine a play time offset based on the first target sound time information and a current time, where the play time offset is used to determine a play time corresponding to the target sound source in the broadcast sound source file and is used to determine a play time corresponding to the target sound source in the target sound source file in response to the playback trigger instruction.
Optionally, the apparatus further comprises:
a following instruction receiving module for receiving a follow-up performance instruction before acquiring performance trigger information generated by a user through a trigger operation on a device sensor of the simulated musical instrument component;
the third file generation module is used for generating and storing a target sound source file based on the target sound source after the target sound source is determined;
the first following note analysis module is used for responding to a following triggering instruction and analyzing a plurality of following note information corresponding to the target sound source file, wherein each following note information comprises a following position, a following strength, a following time and a following gesture;
and the first following sound source playing module is used for outputting the note prompt information corresponding to the following note information one by one based on the following note information, and playing the sound source corresponding to the following note information in the target sound source file after receiving the operation feedback corresponding to the note prompt information.
Optionally, the apparatus further comprises:
the second following note analysis module is used for responding to a following playing instruction of a preset sound source file and analyzing a plurality of following note information corresponding to the preset sound source file, wherein each piece of following note information comprises a following position, a following strength, a following time and a following gesture;
and the second following sound source playing module is used for outputting the note prompt information corresponding to the following note information one by one based on the following note information, and playing the sound source corresponding to the following note information in the preset sound source file after receiving the operation feedback corresponding to the note prompt information.
According to yet another aspect of the present application, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described method of operating a simulated musical instrument assembly.
According to yet another aspect of the present application, there is provided a computer device comprising a storage medium, a processor and a computer program stored on the storage medium and executable on the processor, the processor implementing the above method of operating a simulated musical instrument assembly when executing the program.
By means of the above technical solution, the method and apparatus for operating a simulated musical instrument assembly, the storage medium, and the computer device provided by the present application acquire performance trigger information generated when a user performs a trigger operation on a device sensor of the simulated instrument, and determine from it target sound characteristic information for the sound the user is simulating: target sound time information expressing when the sound is produced, target sound frequency information expressing its pitch, and target sound amplitude information expressing its loudness. A target sound source is then determined and played based on this characteristic information and the sample sound sources in a pre-established sample sound source file. Compared with the prior-art approach of mapping each touch position one-to-one to a sound source, the present application determines the target sound frequency and amplitude from the performance trigger information and derives the target sound source from a small set of samples. The target sound source can therefore reflect both pitch and loudness, expressing the player's technique, so the simulation comes closer to playing a real instrument, while the data volume of the sample sound source file is reduced and memory resources are saved.
The foregoing is only an overview of the technical solutions of the present application. So that the technical means of the application may be more clearly understood and implemented according to the content of the description, and so that the above and other objects, features, and advantages of the application may be more readily apparent, a detailed description of the application follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a flow chart illustrating a method of operating a simulated musical instrument assembly according to an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating a simulated playing interface of a simulated drum according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the twelve-tone equal temperament scale;
FIG. 4 is a schematic diagram illustrating an exemplary simulated playing operation of a simulated stringed musical instrument according to an embodiment of the present application;
fig. 5 is a schematic diagram illustrating a simulated playing interface of a simulated Xun (a Chinese vessel flute) provided by an embodiment of the present application;
FIG. 6 is a flow chart illustrating another method for operating a simulated musical instrument assembly according to an embodiment of the present application;
fig. 7 is a flowchart illustrating a simulation playing method of a broadcast scene according to an embodiment of the present application;
fig. 8 is a schematic flow chart illustrating a recording and playback performance method according to an embodiment of the present application;
fig. 9 is a schematic flow chart illustrating a method for follow-up performance according to an embodiment of the present application;
fig. 10 is a schematic structural diagram illustrating an apparatus for operating a simulated musical instrument assembly according to an embodiment of the present application.
Detailed Description
The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
In this embodiment, a method for operating a simulated musical instrument component is provided, applied to a terminal having a touch identification area. As shown in fig. 1, the method includes:
step 101, acquiring performance trigger information generated by a user through a trigger operation on a device sensor of a simulated musical instrument component, wherein the simulated musical instrument component is provided by a game client;
the embodiment of the application is mainly applied to intelligent terminals, for example, intelligent electronic devices such as smart phones and tablet computers, the device sensor simulating musical instruments can be specifically a display screen with a touch identification function of the intelligent electronic devices, and in addition, the device sensor can also comprise sound receiving equipment corresponding to the intelligent electronic devices. The specific application scene can be a scene for operating simulated musical instrument components in a game, a musical instrument simulation playing function is provided in the game, and a user can perform musical instrument simulation playing in the game, so that the playing method of the game is enriched, and the user experience of game players is improved.
Existing instrument-simulation apps and music-game apps rely on native programs built into or supported by the terminal operating system, and those native programs process touch operation information directly to generate the corresponding performance sounds. In this embodiment, by contrast, the performance module of a game program must process touch operation information within the game engine framework and cannot hand it directly to a native program of the terminal operating system. The present application therefore uses the GPU to process the touch operation information frame by frame under the game engine framework and so determine the target sound characteristic information.
In this embodiment, after a user initiates a simulated playing instruction for a simulated musical instrument in a game scene, a simulated playing interface is provided in the game in response to that instruction. The interface may be displayed in the touch recognition area of the intelligent electronic terminal device and corresponds to a preset playing area; the user performs trigger operations through touch controls in the preset playing area to generate playing trigger information, thereby realizing simulated playing of the simulated musical instrument. The playing trigger information may include touch operation information alone, or touch operation information together with sound wave information. Taking touch operation information as an example, a simulated image of the simulated musical instrument is displayed in the preset playing area; fig. 2 shows a simulated playing interface of a drum, which the user plays by touching the touch controls of the preset playing area. When the user touches the preset playing area, the intelligent terminal collects the user's touch operation information, which specifically includes a touch position, a touch force and a touch time. For example, if the position marked "1" in fig. 2 is touched with a first force at 0 min 0 sec (the actual time precision is higher; this is only an example), then the touch position is "1", the touch force is the first force, and the touch time is 0 min 0 sec.
When playing a real musical instrument, a player makes the instrument sound through contact with it; different sounds result from the different contact positions and contact forces between the player and the instrument. Touch position and touch force information can therefore be collected during simulated playing to determine the sound characteristics to be simulated, while touch time information determines the characteristics expressing the playing rhythm. The touch position information may be acquired by the fingerprint sensor built into the intelligent terminal, and the touch force information by a pressure sensor.
Step 102, determining target sound characteristic information corresponding to the playing trigger information, wherein the target sound characteristic information comprises target sound time information, target sound frequency information and target sound amplitude information;
in the embodiment of the present application, after the touch position, touch force and touch time information are collected, the target sound characteristic information corresponding to the touch operation information is determined according to a preset note rule. Taking a piano as an example, each key corresponds to a key sound, and each key sound corresponds to a specific pitch. As is known from the principle of sound generation, pitch depends on frequency, and each key sound has its own specific frequency; in a specific application scenario, the corresponding target sound frequency information may therefore be determined from the touch position. Further, loudness is related to amplitude: the harder a key is pressed when playing a real piano, the louder the sound, so the corresponding target sound amplitude information can be determined from the touch force. In addition, the target sound time information of each target sound can be determined from the touch time.
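The mapping described above can be sketched in a few lines. The following is a minimal illustration, not the patented implementation itself: the note rule mapping a touch position to a piano key number, the normalisation of the touch force, and all names (`TargetSound`, `touch_to_target_sound`) are assumptions made here for clarity.

```python
from dataclasses import dataclass

@dataclass
class TargetSound:
    time_s: float        # target sound time information (seconds since start)
    frequency_hz: float  # target sound frequency information (pitch)
    amplitude: float     # target sound amplitude information (loudness, 0..1)

def touch_to_target_sound(position_key: int, force: float, touch_time_s: float,
                          max_force: float = 1.0) -> TargetSound:
    # Assumed note rule: the touch position indexes a piano key number, and the
    # key number is converted to a frequency by twelve-tone equal temperament
    # (key 49 = a1 = 440 Hz). Touch force is clipped into a 0..1 loudness.
    frequency = 440.0 * 2.0 ** ((position_key - 49) / 12.0)
    amplitude = min(force / max_force, 1.0)
    return TargetSound(time_s=touch_time_s, frequency_hz=frequency,
                       amplitude=amplitude)
```

With this sketch, a touch on key 49 with half the maximum force at time 1.25 s yields a 440 Hz target sound of amplitude 0.5 at 1.25 s.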
Step 103, determining and playing the target sound source based on the sample sound source file and the target sound characteristic information, wherein the sample sound source file comprises at least one sample sound source and sample sound frequency information corresponding to each sample sound source.
In the embodiment of the present application, after the target sound characteristic information is obtained, a sample sound source matching it may be queried in a pre-established sample sound source file. Specifically, the sample sound source file includes at least one sample sound source and records the sample sound frequency information corresponding to each one. Because different pitches correspond to different sound frequencies, a sample sound source matching the target sound frequency can be found in the file; loudness adjustment is then performed on the matched sample sound source based on the target sound amplitude information to determine the target sound source, and the arrangement order and duration of each target sound are determined according to the touch time corresponding to each target sound. The target sound source is then played, realizing the performance simulation of the simulated musical instrument and making the finally played target sound source match the user's touch operation information on the intelligent terminal. This method determines the target sound frequency information from the touch position to express the pitch of the sound and the target sound amplitude information from the touch force to express the loudness of the sound, which helps express the player's performance technique. Moreover, because the target sound source is determined from the sample sound sources and the target sound characteristic information stored in the sample sound source file, the file does not need to exhaust the sound sources of all pitches: only a small number of sample sound sources need to be stored, and target sound sources of different pitches and loudnesses can be generated from the sample sound frequencies and the target sound frequency, combined with the target sound amplitude information, which reduces the data volume of the sample sound source file and saves memory resources.
By applying the technical scheme of this embodiment, performance trigger information generated by a user through a trigger operation on the device sensor of a simulated musical instrument is acquired, and the target sound characteristic information corresponding to the sound the user simulates is determined from it, wherein the target sound characteristic information includes target sound time information expressing when the target sound is generated, target sound frequency information expressing its pitch, and target sound amplitude information expressing its loudness; a target sound source is then determined and played based on the target sound characteristic information and a sample sound source in a pre-established sample sound source file. Compared with the prior-art approach of mapping each touch position one-to-one onto a target sound source, the present method determines the target sound frequency and target sound amplitude from the performance trigger information, so the target sound source can be determined from a small number of sample sound sources while still reflecting the target pitch and loudness. This shows the player's performance technique, brings the simulation closer to the effect of a real musical instrument, reduces the data volume of the sample sound source files, and saves memory resources.
In this embodiment of the present application, optionally, in step 103, determining the target sound source based on the sample sound source file and the target sound feature information includes:
step 103-1, inquiring whether a target sample sound source matched with the target sound frequency information exists in the sample sound source file;
step 103-2, if the target sample sound source exists, determining the target sound source based on the target sample sound source, the target sound amplitude information and the target sound time information;
and step 103-3, if it does not exist, performing tone modification processing on a sample sound source according to a preset sound source tone modification rule and the target sound frequency information, and generating the target sound source by combining the target sound amplitude information and the target sound time information, wherein the preset sound source tone modification rule at least includes a tone modification rule matched with twelve-tone equal temperament.
In the above embodiment, a plurality of sample sound sources and the sample sound frequency information corresponding to each one are pre-stored in a sample sound source file. To determine a target sound source, the file is first queried for a sample sound source matching the target sound frequency information. If one exists, the target sound source is determined directly from the matching sample sound source: specifically, a target sound source file may be generated based on the matching sample sound source and the target sound amplitude information so that the target sound source file can be played directly, and when the playing module of the intelligent terminal plays it, loudness adjustment is applied to the sample sound source to realize playback of the target sound, that is, an accurate response to the user's touch operation.
If no matching sample sound source exists, the target sound source is generated from a sample sound source in the sample sound source file, so that a response to the user's touch operation is still achieved. As shown in fig. 3, twelve-tone equal temperament, also called "twelve-equal temperament", is the universal temperament in which an octave is divided into twelve semitones such that the frequency ratio between every two adjacent tones is exactly equal. That is, an octave is divided into twelve equal parts on a frequency-ratio (logarithmic) scale, each part being a semitone (a minor second); a major second spans two such parts. Dividing the octave into 12 equal parts has a remarkable consequence: the frequency ratio of its perfect fifth (2 to the power 7/12) is very close to 1.5, so the difference between the fifth of Pythagorean tuning (the "circle of fifths") and that of twelve-tone equal temperament is essentially inaudible to human ears. Twelve-tone equal temperament is widely used in symphony orchestras and keyboard instruments, and pianos are tuned according to it. The international standard pitch specifies that the a1 of a piano (the A of the one-line octave, corresponding to piano key 49) has a frequency of 440 Hz, and further specifies that the frequency ratio of adjacent semitones is 2^(1/12) ≈ 1.059463 (that is, the twelfth root of 2). From this, the frequency of every key sound on the piano can be obtained: the #a1 adjacent to the right of a1 is 440 × 1.059463 ≈ 466.16 Hz; further up, b1 is ≈ 493.88 Hz; similarly, c2 is ≈ 523.25 Hz; and the #g1 adjacent to the left of a1 is 440 ÷ 1.059463 ≈ 415.30 Hz. This manner of fixing pitch is the "twelve-equal law". After the closest sample sound source has been modified according to the target sound frequency, a target sound source file can be generated based on the modified sample sound source and the target sound amplitude information, so that the file is played directly; when the playing module of the intelligent terminal plays it, loudness adjustment is applied to the modified sample sound source to realize playback of the target sound, that is, an accurate response to the user's touch operation.
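The twelve-tone equal temperament arithmetic above reduces to a one-line formula. This sketch (function name is illustrative) reproduces the frequencies quoted in the text from the standard reference point of key 49 = a1 = 440 Hz:

```python
# Twelve-tone equal temperament: adjacent semitones differ by a factor of
# 2**(1/12) ≈ 1.059463, with piano key 49 (a1) fixed at 440 Hz.
SEMITONE = 2.0 ** (1.0 / 12.0)

def piano_key_frequency(key_number: int) -> float:
    """Frequency in Hz of a piano key (key 49 = a1 = 440 Hz)."""
    return 440.0 * SEMITONE ** (key_number - 49)
```

For example, key 50 (#a1) comes out near 466.16 Hz and key 48 (#g1) near 415.30 Hz, matching the values derived in the text.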
It should be noted that steps 103-1 to 103-3 are performed in real time: after the user's touch operation information is obtained, the target sound source file is immediately generated according to the corresponding target sound characteristic information and played in real time, which guarantees the real-time performance of the simulated playing.
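The match-or-modify decision of steps 103-1 to 103-3 can be sketched as follows. This is a simplified illustration only: the sample sound source file is modelled as a frequency-to-audio mapping, and `pitch_shift` is a naive nearest-neighbour resampler standing in for the preset sound source tone modification rule; all names and the tolerance value are assumptions.

```python
def determine_target_source(samples: dict, target_hz: float, amplitude: float,
                            tolerance_hz: float = 0.5):
    # Step 103-1: query whether a matching sample sound source exists.
    for sample_hz, audio in samples.items():
        if abs(sample_hz - target_hz) <= tolerance_hz:
            # Step 103-2: matched - apply loudness adjustment only.
            return [x * amplitude for x in audio]
    # Step 103-3: no match - tone-modify the closest sample, then adjust loudness.
    closest_hz = min(samples, key=lambda hz: abs(hz - target_hz))
    shifted = pitch_shift(samples[closest_hz], closest_hz, target_hz)
    return [x * amplitude for x in shifted]

def pitch_shift(audio, from_hz, to_hz):
    # Naive placeholder resampler: raising the pitch shortens the buffer by the
    # frequency ratio (a real implementation would interpolate and anti-alias).
    ratio = from_hz / to_hz
    n = max(1, int(len(audio) * ratio))
    return [audio[min(int(i / ratio), len(audio) - 1)] for i in range(n)]
```

A target at 440 Hz with a 440 Hz sample on file takes the step 103-2 branch; a target at 880 Hz takes the step 103-3 branch and plays the sample back at double speed.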
In this embodiment of the application, optionally, when the simulated musical instrument is of a first musical instrument type, the device sensor includes a touch sensor corresponding to the touch recognition area, and the performance trigger information includes touch operation information. Correspondingly, step 101 may specifically include: receiving a touch sensing signal from the touch sensor and analyzing it to determine the touch operation information, wherein the touch operation information includes a touch position, a touch force and a touch time; the target sound frequency information is matched with the touch position, the target sound amplitude information with the touch force, and the target sound time information with the touch time. The target sound characteristic information further includes touch gesture information matched with the touch position and the touch time, and the touch gesture information at least includes a sliding gesture and a clicking gesture.
In addition, when the simulated musical instrument is of a second musical instrument type, the device sensor includes a touch sensor corresponding to the touch recognition area and a sound sensor corresponding to the sound wave acquisition position, and the performance trigger information includes touch operation information and sound wave input information. Correspondingly, step 101 may specifically include: receiving a touch sensing signal from the touch sensor and a sound wave signal from the sound sensor; analyzing the touch sensing signal to determine the touch operation information and the sound wave signal to determine the sound wave input information, wherein the touch operation information includes a touch position and a touch time, and the sound wave input information includes a sound wave peak value; the target sound frequency information is matched with the touch position, the target sound amplitude information with the sound wave peak value, and the target sound time information with the touch time.
In the above embodiment, the target sound characteristic information of different types of simulated musical instruments may be determined from different collected information. The first musical instrument type covers instruments whose simulated playing requires only touch operations on the preset playing area, such as a piano or a zither; the second musical instrument type covers instruments that additionally require a sound wave signal to be input into a preset sound wave collecting device, such as an Xun (an ancient egg-shaped wind instrument) or a horn.
When the simulated musical instrument is of the first musical instrument type, the target sound characteristic information further includes touch gesture information obtained by analyzing the touch position and touch time. Depending on the instrument type, the touch gesture information may include sliding gestures and clicking gestures; for example, for a stringed instrument such as a koto, the sliding gestures may include a slide (a gesture for producing a sliding tone), a scratch and a finger shake, and the clicking gesture may be a press. Fig. 4 is a schematic diagram of the operation description of the simulated playing of a stringed instrument according to the embodiment of the present application, including the operation gesture descriptions of the slide, finger shake and scratch. When the target sound source is played, it is played based on the matched or tone-modified sample sound source, the target sound amplitude information and the gesture information.
When the simulated musical instrument is of the second musical instrument type, besides receiving the user's touch operation information on the preset playing area, the sound wave signal input by the user is collected through a preset sound wave collecting device, for example a sound receiving device built into or attached to the intelligent terminal. The user may blow or speak towards the sound receiving device, which collects the sound wave signal; the average peak value of the sound wave and/or the touch force are then analyzed from the signal to determine the target sound amplitude information, so that the target sound source can be determined and played. Fig. 5 shows a simulated playing interface of an Xun, an ancient egg-shaped, holed wind instrument, with the mouthpiece of the Xun displayed near the sound receiving device.
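For second-type instruments, matching the target sound amplitude to the sound wave peak can be illustrated with a short sketch. The 16-bit full-scale normalisation and the function name are assumptions made here; the patent itself does not specify a sample format.

```python
def amplitude_from_wave(pcm_samples, full_scale: int = 32768) -> float:
    # Take the peak of the captured sound wave buffer (signed PCM samples,
    # assumed 16-bit) and normalise it into a 0..1 target sound amplitude.
    peak = max(abs(s) for s in pcm_samples)
    return min(peak / full_scale, 1.0)
```

A gentle breath producing a half-scale peak thus yields a target amplitude of 0.5, while any clipped input saturates at 1.0.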
Fig. 6 shows a flow chart of a method for operating a simulated musical instrument assembly according to an embodiment of the present application. As shown in fig. 6, the device sensor may include a position sensor, a pressure sensor, a touch recognition sensor, a sound receiving device and the like corresponding to the preset playing area. The sound source characteristic data collected by the device sensor (which may include touch operation information and sound wave signals) is received by the receiver, and the sample sound source pool is queried for a sample sound source matching the received data. If one exists, the matched sample sound source is added to the cache pool; if not, an actual sound source matching the sound source characteristic data is generated from a sample sound source and added to the cache pool. Finally, the sound source data in the cache pool is processed and played according to the sound source characteristic data.
In addition, the embodiment of the application provides several specific application scenes for simulated musical instrument playing: a broadcast scene, in which other users receive and play the performer's music; a following performance scene, in which the user imitates previously played or preset music; and a recording and playback scene, in which the music played by the user is saved and played back when needed.
In the broadcast scenario, that is, when a broadcast performance instruction is received before step 101, the method further includes, after step 103:
step 104, generating a target sound source file based on the target sound source and the playing duration corresponding to the target sound source;
and step 105, when the playing duration corresponding to the target sound source file exceeds a first preset duration, sending the target sound source file to a preset server, so that the preset server generates broadcast sound source files corresponding to a second preset duration based on the target sound source file and sends them to the broadcast object terminal, wherein the first preset duration is greater than the second preset duration.
In the foregoing embodiment, when the user indicates before the simulated performance that it is to be broadcast, a target sound source file is generated from the target sound sources after each one is determined in real time. The target sound source file comes in two types: the first type contains the sample sound source (or tone-modified sample sound source) matching each target sound together with the target sound characteristic information, while the second type contains sound sources already processed from those samples according to the target sound characteristic information. Note that the target sound source file is not generated all at once but grows gradually as the user performs. Because the file must be forwarded to the broadcast object through the server and is subject to uncertainty such as network fluctuation, a delayed broadcast mode may be adopted to guarantee the broadcast object's playing effect: the music heard by the broadcast object is the music the performer played some time earlier, for example 30 seconds earlier. In this scenario, as shown in fig. 7, touch operation information and/or sound wave information is generated from the player's input, and target sound sources are gradually added to the target sound source file accordingly. While the playing duration of the file is less than the first preset duration, target sound sources continue to be generated and added; once it reaches the first preset duration, the client starts sending the file to the server and keeps sending its newly added content. After receiving the target sound source file, the server generates broadcast sound source files by splitting it according to a second preset duration, which is less than the first preset duration; for example, the server sends the broadcast sound source files in sequence, one every 5 seconds, to the broadcast object, thereby securing the broadcast effect. An automatic player may assemble the target sound sources based on the timbre of the simulated musical instrument, and an audio processor may apply loudness adjustment to the assembled target sound sources based on the corresponding target sound amplitude information, finally outputting sound matched with the user's playing operation, so that the output better matches the user's playing technique and is closer to the effect of a real musical instrument.
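The delayed-broadcast buffering just described can be sketched as follows. The threshold values and all names here are illustrative stand-ins for the first and second preset durations; network transport is omitted.

```python
FIRST_PRESET = 30.0   # client-side buffering threshold before upload (seconds)
SECOND_PRESET = 5.0   # duration of each broadcast sound source file (seconds)

class BroadcastBuffer:
    """Client side: withhold the target sound source file until it is long enough."""
    def __init__(self):
        self.duration = 0.0
        self.sending = False

    def add(self, source_duration: float) -> bool:
        # Returns True once the file should start streaming to the server.
        self.duration += source_duration
        if not self.sending and self.duration >= FIRST_PRESET:
            self.sending = True
        return self.sending

def split_for_broadcast(total_duration: float):
    """Server side: (start, end) boundaries of SECOND_PRESET-second broadcast files."""
    chunks, t = [], 0.0
    while t < total_duration:
        chunks.append((t, min(t + SECOND_PRESET, total_duration)))
        t += SECOND_PRESET
    return chunks
```

With these values, uploading begins only after 30 seconds of performance have accumulated, and the server then emits 5-second broadcast files, giving the listener a steady 30-second delay.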
In this application scenario, specifically, after receiving a broadcast sound source file, the broadcast object terminal determines its playing time according to the target sound time information corresponding to the target sound sources in the file and a third preset duration, and plays the file at that time.
In the foregoing embodiment, after receiving the broadcast sound source file from the server, the broadcast object terminal may play it directly, or may buffer it for a period and then play it. For example, if the time corresponding to the first target sound in the broadcast sound source file is 0 min 0 sec and the third preset duration is 30 seconds, playback of the file starts at 0 min 30 sec, thereby implementing the broadcast effect in the game.
In the recording and playback scenario, that is, when a recording and playback instruction is received before step 101, the method further includes, after step 103:
step 106, generating a target sound source file based on the target sound source, storing and sending the target sound source file to a preset server;
and step 107, responding to the playback triggering instruction, and playing based on the local target sound source file or the target sound source file acquired from the preset server.
In the above embodiment, as shown in fig. 8, the player's input is recorded during the performance and the target sound sources are added to the target sound source file in real time, or the target sound source file is generated from the target sound sources after the performance ends. The file is stored locally and uploaded to the preset server. When a playback trigger instruction is received, the target sound source file indicated by the instruction is found locally or downloaded from the preset server, and is played. In addition, a pre-configured music file may also be played: in that scenario, the configuration file may be translated by a translation tool into a sound source file; for example, if the configuration file is a music score, the notes in the score may be analyzed to obtain the sound source corresponding to each note.
In the embodiment of the present application, specifically for the broadcast scene and the recording playback scene, if there are a plurality of pieces of target sound characteristic information, they correspond to a plurality of target sound sources. When determining the target sound sources, a play time offset should be determined at the same time, specifically: acquiring first target sound time information corresponding to the first target sound source among the target sound sources, and determining the play time offset based on the first target sound time information and the current time, wherein the play time offset is used to determine the playing time of each target sound source in the broadcast sound source file and, in response to a playback trigger instruction, the playing time of each target sound source in the target sound source file.
In the above embodiment, when broadcasting or playing back a recording, to ensure that the simulated music matches the user's actual touch operations, the target sound time corresponding to the first target sound source, i.e. the first target sound time information, is first obtained from the touch times of the user's operations, and the play time offset is calculated from the current time and the first target sound time information, so that playback is scheduled using this offset. Specifically, the current time is the playing start time: the first target sound source is played at the start time, and the playing time of every other target sound source is determined from the play time offset and its own target sound time, i.e. the sum of its touch time and the play time offset.
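The play-time-offset calculation reduces to a small sketch. The function name is illustrative; it assumes the touch times are recorded in seconds and sorted, with the first entry belonging to the first target sound source.

```python
def playback_schedule(touch_times, start_time):
    # Play time offset = playback start time minus the first target sound time;
    # every target sound source then plays at its touch time plus the offset.
    offset = start_time - touch_times[0]
    return [t + offset for t in touch_times]
```

For instance, sounds touched at 2.0 s, 2.5 s and 4.0 s of the performance, played back starting at t = 100.0 s, are scheduled at 100.0 s, 100.5 s and 102.0 s, preserving the original rhythm.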
In the following performance scenario, that is, when a following performance instruction is received before step 101, the method further includes, after step 103:
step 108, generating and storing a target sound source file based on the target sound source;
step 109, in response to a following trigger instruction, analyzing a plurality of pieces of following note information corresponding to the target sound source file, wherein each piece of following note information includes a following position, a following strength, a following time and a following gesture;
and step 110, outputting the note prompt information corresponding to each piece of following note information one by one, and, after receiving the operation feedback corresponding to the note prompt information, playing the sound source in the target sound source file corresponding to that following note information.
Steps 108 to 110 above provide a method for following performance based on the user's own simulated performance file. After the user performs on the simulated musical instrument, the performance content is saved: a target sound source file is generated and stored from the target sound sources and used as the following sound source file. When a following trigger instruction is received, as shown in fig. 9, the following sound source file is read and analyzed to obtain a plurality of pieces of following note information. Because the music corresponding to a note depends on pitch, loudness, duration and playing technique, the following note information specifically includes the following position, following strength, following time and following gesture. The corresponding note prompt information is displayed one piece at a time, and after the operation feedback corresponding to a prompt is received, the corresponding sound source in the target sound source file is played; the following performance ends when all sound sources have been played. The note prompt information may specifically display an icon at the following position: for example, the icon is darker when the following strength is larger and bigger when the following time is longer, the following gesture is represented by an arrow pointing in the direction the finger should slide, and the following time may also be presented as a beat prompt.
In the following performance scene, the target sound source file may be generated in real time during the user's performance, or generated once after the performance ends.
In the embodiment of the application, a preset sound source file can also be played in the following manner. Specifically, in response to a following playing instruction for the preset sound source file, a plurality of pieces of following note information corresponding to the file are analyzed, wherein each piece includes a following position, a following strength, a following time and a following gesture; the note prompt information corresponding to each piece of following note information is output one by one, and after the operation feedback corresponding to the note prompt information is received, the corresponding sound source in the preset sound source file is played.
In the above embodiment, the following sound source file is a sound source file preset and configured by planning staff. When a following playing instruction for the preset sound source file is received, the file is analyzed to obtain a plurality of pieces of following note information. Because the music corresponding to a note depends on pitch, loudness, duration and playing technique, the following note information specifically includes the following position, following strength, following time and following gesture. The corresponding note prompt information is displayed one piece at a time, and after the operation feedback corresponding to a prompt is received, the corresponding sound source in the preset sound source file is played; the following playing ends when all sound sources have been played.
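The prompt-then-wait-then-play loop common to both following modes can be sketched as below. This is a schematic illustration only: `show_prompt`, `wait_for_feedback` and `play` are hypothetical callbacks standing in for the UI prompt display, the user's operation feedback, and the sound source playback described above.

```python
def follow_performance(notes, show_prompt, wait_for_feedback, play):
    # notes: parsed following note information (position, strength, time, gesture).
    for note in notes:
        show_prompt(note)            # icon at the following position, arrow for gesture
        if wait_for_feedback(note):  # block until matching operation feedback arrives
            play(note)               # play the corresponding sound source
    # loop exits after all sound sources are played, ending the following performance
```

The same loop serves both the user-recorded target sound source file and a planner-configured preset sound source file; only the source of `notes` differs.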
Further, as a specific implementation of the method in fig. 1, an embodiment of the present application provides an apparatus for running a simulated musical instrument assembly for a game client. As shown in fig. 10, the apparatus includes:
a performance information receiving module 901, configured to acquire performance trigger information generated by a user through a trigger operation on a device sensor of a simulated musical instrument component provided by the game client;
a characteristic information obtaining module 902, configured to determine target sound characteristic information corresponding to the performance triggering information, where the target sound characteristic information includes target sound time information, target sound frequency information, and target sound amplitude information;
and the target sound source determining module 903 is configured to determine and play a target sound source based on a sample sound source file and the target sound feature information, where the sample sound source file includes at least one sample sound source and sample sound frequency information corresponding to each sample sound source.
Optionally, the target sound source determining module 903 specifically includes:
the sample sound source matching unit 9031 is configured to query whether a target sample sound source matching the target sound frequency information exists in the sample sound source file;
a target sound source determining unit 9032, configured to, if a matching target sample sound source exists, determine a target sound source based on the target sample sound source, the target sound amplitude information, and the target sound time information;
and a sample sound source tone changing unit 9033, configured to, if no matching target sample sound source exists, perform tone changing processing on a sample sound source according to a preset sound source tone changing rule and the target sound frequency information, and generate the target sound source in combination with the target sound amplitude information and the target sound time information, where the preset sound source tone changing rule at least includes a tone changing rule matched with the twelve-tone equal temperament.
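The tone changing rule matched with the twelve-tone equal temperament can be illustrated with a short calculation: adjacent semitones differ by a frequency ratio of 2^(1/12), so shifting a sample sound source to the target sound frequency amounts to a whole number of semitone steps. A minimal sketch, with hypothetical helper names rather than the disclosed rule itself:

```python
import math


def semitone_offset(sample_freq: float, target_freq: float) -> int:
    """Number of equal-temperament semitones separating the sample sound
    source's pitch from the target sound frequency (positive = shift up)."""
    return round(12 * math.log2(target_freq / sample_freq))


def pitch_ratio(semitones: int) -> float:
    """Playback-rate ratio realizing a shift of `semitones` under the
    twelve-tone equal temperament: each semitone multiplies the frequency
    by 2 ** (1/12)."""
    return 2.0 ** (semitones / 12.0)
```

In practice the ratio would be applied by resampling or time-stretching the sample sound source; that signal-processing step is outside this sketch.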
Optionally, the simulated musical instrument is of a first musical instrument type, the device sensor includes a touch sensor corresponding to the touch identification area, and the playing trigger information includes touch operation information;
the performance information receiving module 901 specifically includes:
the first performance information analyzing unit 9011 is configured to receive a touch sensing signal corresponding to the touch sensor, and analyze the touch sensing signal to determine touch operation information, where the touch operation information includes a touch position, a touch force, and a touch time, the target tone frequency information is matched with the touch position, the target tone amplitude information is matched with the touch force, and the target tone time information is matched with the touch time.
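As an illustration of how the analyzed touch operation information might map to the three pieces of target sound feature information (frequency from touch position, amplitude from touch force, time from the touch timestamp), here is a hypothetical sketch. The 13-region, one-octave layout starting at A4 (440 Hz) is purely an assumption for the example, not the disclosed layout:

```python
def parse_touch_event(event: dict) -> dict:
    """Turn one raw touch-sensing sample into target sound feature
    information: frequency matched to the touch position, amplitude
    matched to the touch force, time matched to the touch time."""
    # Hypothetical layout: 13 equal regions spanning one octave from A4.
    key_index = min(int(event["x"] * 13), 12)
    return {
        "frequency": 440.0 * 2.0 ** (key_index / 12.0),  # pitch from position
        "amplitude": event["force"],                      # loudness from force
        "time": event["timestamp"],                       # onset from touch time
    }
```

A real touch identification area would of course use the instrument's own fingering layout; only the three-way mapping is the point here.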
Optionally, the target tone feature information further includes touch gesture information matched with the touch position and the touch time, and the touch gesture information at least includes a slide gesture and a click gesture.
Optionally, the simulated musical instrument is of a second musical instrument type, the device sensor includes a touch sensor corresponding to the touch identification area and a sound sensor corresponding to the sound wave acquisition position, and the playing trigger information includes touch operation information and sound wave input information;
the performance information receiving module 901 specifically includes:
the signal receiving unit 9012 is configured to receive a touch sensing signal corresponding to the touch sensor and a sound wave signal corresponding to the sound sensor;
the second performance information analyzing unit 9013 is configured to analyze the touch sensing signal to determine touch operation information, and analyze the acoustic wave signal to determine acoustic wave input information, where the touch operation information includes a touch position and touch time, the acoustic wave input information includes an acoustic wave peak value, the target sound frequency information is matched with the touch position, the target sound amplitude information is matched with the acoustic wave peak value, and the target sound time information is matched with the touch time.
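The matching of the target sound amplitude information to the sound wave peak can be illustrated as follows; the `full_scale` parameter and the clamping to 1.0 are assumptions of the sketch, not part of the disclosure:

```python
def acoustic_peak(samples) -> float:
    """Peak absolute value of one frame of the microphone's sound wave signal."""
    return max(abs(s) for s in samples)


def blow_amplitude(samples, full_scale: float = 1.0) -> float:
    """Per the description above, the target sound amplitude information is
    matched to the sound wave peak: blowing or striking harder produces a
    larger peak and hence a louder simulated note. Clamped to 1.0."""
    return min(acoustic_peak(samples) / full_scale, 1.0)
```

For the second musical instrument type, this amplitude would be combined with the touch position (frequency) and touch time from the touch sensor to form the complete target sound feature information.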
Optionally, the apparatus further comprises:
a broadcast instruction receiving module 904, configured to receive a broadcast performance instruction before the performance trigger information generated by the user through a trigger operation on a device sensor of the simulated musical instrument component is acquired;
the first file generation module 905 is configured to, after determining a target sound source, generate a target sound source file based on the target sound source and a playing duration corresponding to the target sound source;
the first file sending module 906 is configured to send the target sound source file to the preset server when the playing time duration corresponding to the target sound source file exceeds a first preset time duration, so that the preset server generates a broadcast sound source file corresponding to a second preset time duration based on the target sound source file and sends the broadcast sound source file to the broadcast object terminal, where the first preset time duration is greater than the second preset time duration.
Optionally, after receiving the broadcast sound source file, the broadcast object terminal determines the broadcast time of the broadcast sound source file according to the target sound time information corresponding to the target sound source in the broadcast sound source file and a third preset time length, and plays the broadcast sound source file according to the broadcast time.
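The duration gating described above — upload the target sound source file only after its playing duration exceeds the first preset duration, with the server then producing a broadcast sound source file covering the shorter second preset duration — can be sketched as follows. The 30 s / 15 s values and the choice to keep the tail of the performance are purely illustrative assumptions:

```python
def should_broadcast(play_duration: float, first_preset: float = 30.0) -> bool:
    """Client side: send the target sound source file to the server only
    once the performance exceeds the first preset duration."""
    return play_duration > first_preset


def clip_for_broadcast(notes, second_preset: float = 15.0):
    """Server side: keep only the portion of the performance that fits
    within the second preset duration (first preset > second preset).
    Here we assume the most recent tail is kept."""
    end = notes[-1]["time"]
    return [n for n in notes if n["time"] >= end - second_preset]
```

The broadcast object terminal would then schedule playback from the retained notes' target sound time information plus the third preset duration.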
Optionally, the apparatus further comprises:
a recording instruction receiving module 907, configured to receive a recording performance instruction before the performance trigger information generated by the user through a trigger operation on a device sensor of the simulated musical instrument component is acquired;
the second file generation module 908 is configured to, after determining the target sound source, generate a target sound source file based on the target sound source, store the target sound source file, and send the target sound source file to the preset server;
and a target sound source playback module 909, configured to play based on a local target sound source file or a target sound source file obtained from a preset server in response to a playback trigger instruction.
Optionally, the target sound characteristic information includes a plurality of pieces of target sound characteristic information, and the plurality of pieces correspond to a plurality of target sound sources; the target sound source determining module 903 specifically includes:
the offset time determining unit 9034 is configured to acquire first target sound time information corresponding to a first target sound source in the target sound sources, and determine a play time offset based on the first target sound time information and the current time, where the play time offset is used to determine a play time corresponding to a target sound source in the broadcast sound source file and is used to determine a play time corresponding to a target sound source in the target sound source file in response to the playback trigger instruction.
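The play time offset computed by unit 9034 can be illustrated with a short sketch: the offset between the first target sound source's time information and the current time is applied to every note, so that broadcast or playback preserves the recorded rhythm relative to when playing starts. The function name is hypothetical:

```python
def play_schedule(note_times, now: float):
    """Shift every target sound source's play time by the offset between
    the first target sound time information and the current time, so the
    first note plays immediately and the original rhythm is preserved."""
    offset = now - note_times[0]          # the play time offset of unit 9034
    return [t + offset for t in note_times]
```

The same offset serves both the broadcast sound source file and playback of a recorded target sound source file in response to the playback trigger instruction.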
Optionally, the apparatus further comprises:
a follow-up instruction receiving module 910, configured to receive a follow-up performance instruction before the performance trigger information generated by the user through a trigger operation on a device sensor of the simulated musical instrument component is acquired;
a third file generating module 911, configured to generate and store a target sound source file based on the target sound source after determining the target sound source;
a first following note analyzing module 912, configured to respond to the following trigger instruction, and analyze a plurality of following note information corresponding to the target sound source file, where each following note information includes a following position, a following strength, a following time, and a following gesture;
the first following sound source playing module 913 is configured to output the note prompt information corresponding to the following note information one by one based on the following note information, and play the sound source corresponding to the following note information in the target sound source file after receiving the operation feedback corresponding to the note prompt information.
Optionally, the apparatus further comprises:
the second following note analyzing module 914 is configured to, in response to a following playing instruction for the preset sound source file, analyze a plurality of following note information corresponding to the preset sound source file, where each following note information includes a following position, a following strength, a following time, and a following gesture;
the second following sound source playing module 915 is configured to output the note prompt information corresponding to the following note information one by one based on the plurality of following note information, and play the sound source corresponding to the following note information in the preset sound source file after receiving the operation feedback corresponding to the note prompt information.
It should be noted that, other corresponding descriptions of the functional units related to the apparatus for operating a simulated musical instrument assembly provided in the embodiment of the present application may refer to the corresponding descriptions in the methods of fig. 1 to fig. 9, and are not repeated herein.
Based on the method shown in fig. 1 to 9, correspondingly, the present application further provides a storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the method for operating a simulated musical instrument assembly shown in fig. 1 to 9.
Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a removable hard disk) and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods of the implementation scenarios of the present application.
Based on the above methods shown in fig. 1 to fig. 9 and the virtual device embodiment shown in fig. 10, in order to achieve the above object, the present application further provides a computer device, which may specifically be a personal computer, a server, a network device, and the like, where the computer device includes a storage medium and a processor; a storage medium for storing a computer program; a processor for executing a computer program to implement the above-described method of operating a simulated musical instrument assembly as shown in fig. 1 to 9.
Optionally, the computer device may further include a user interface, a network interface, a camera, radio frequency (RF) circuitry, sensors, audio circuitry, a WI-FI module, and so on. The user interface may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and may optionally further include a USB interface, a card reader interface, and the like. The network interface may optionally include a standard wired interface or a wireless interface (e.g., a Bluetooth interface or a WI-FI interface).
Those skilled in the art will appreciate that the computer device structure provided in this embodiment does not limit the computer device; a computer device may include more or fewer components, combine certain components, or arrange the components differently.
The storage medium may further include an operating system and a network communication module. The operating system is a program that manages and maintains the hardware and software resources of the computer device, supporting the operation of the information processing program and other software and/or programs. The network communication module is used for communication among the components within the storage medium, and between the storage medium and the other hardware and software in the physical device.
Through the above description of the embodiments, those skilled in the art can clearly understand that the present application may be implemented by software plus a necessary general hardware platform, or by hardware. In either case, performance trigger information generated by a user through a trigger operation on a device sensor of the simulated musical instrument is acquired, and target sound feature information corresponding to the sound simulated by the user is determined from that trigger information; the target sound feature information includes target sound time information representing when the target sound is produced, target sound frequency information representing its pitch, and target sound amplitude information representing its loudness. A target sound source is then determined from the target sound feature information and the sample sound sources in a pre-established sample sound source file, and played. Compared with the prior-art approach of mapping each touch position one-to-one to a target sound source, the present application determines the target sound frequency and target sound amplitude from the performance trigger information generated through the user's trigger operation on the device sensor, so that target sound sources can be derived from a small number of sample sound sources. The target sound source thus reflects the pitch and volume of the target sound and exhibits the player's playing technique, making the simulation closer to the effect of a real musical instrument while reducing the data volume of the sample sound source file and saving memory resources.
Those skilled in the art will appreciate that the figures are merely schematic representations of one preferred implementation scenario and that the blocks or flow diagrams in the figures are not necessarily required to practice the present application. Those skilled in the art will appreciate that the modules in the devices in the implementation scenario may be distributed in the devices in the implementation scenario according to the description of the implementation scenario, or may be located in one or more devices different from the present implementation scenario with corresponding changes. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The above application serial numbers are for description purposes only and do not represent the superiority or inferiority of the implementation scenarios. The above disclosure is only a few specific implementation scenarios of the present application, but the present application is not limited thereto, and any variations that can be made by those skilled in the art are intended to fall within the scope of the present application.

Claims (14)

1. A method of operating a simulated musical instrument assembly for use in a game client, comprising:
acquiring performance trigger information generated by a user through a trigger operation on a device sensor of a simulated musical instrument component provided by the game client;
determining target tone characteristic information corresponding to the playing trigger information, wherein the target tone characteristic information comprises target tone time information, target tone frequency information and target tone amplitude information;
and determining and playing a target sound source based on a sample sound source file and the target sound characteristic information, wherein the sample sound source file comprises at least one sample sound source and sample sound frequency information corresponding to each sample sound source.
2. The method according to claim 1, wherein the determining a target sound source based on the sample sound source file and the target sound feature information specifically comprises:
inquiring whether a target sample sound source matched with the target sound frequency information exists in the sample sound source file;
if a matching target sample sound source exists, determining the target sound source based on the target sample sound source, the target sound amplitude information and the target sound time information;
if not, carrying out tone modification processing on the sample sound source according to a preset sound source tone modification rule and the target sound frequency information, and generating the target sound source in combination with the target sound amplitude information and the target sound time information, wherein the preset sound source tone modification rule at least comprises a tone modification rule matched with the twelve-tone equal temperament.
3. The method of claim 1, wherein the simulated musical instrument is of a first musical instrument type, the device sensor comprises a touch sensor corresponding to a touch identification area, and the performance trigger information comprises touch operation information;
the acquiring of the performance trigger information generated by the user through the triggering operation of the device sensor of the simulated musical instrument component specifically includes:
receiving a touch sensing signal corresponding to the touch sensor, and analyzing the touch sensing signal to determine touch operation information, wherein the touch operation information includes a touch position, a touch force and a touch time, the target audio frequency information is matched with the touch position, the target audio amplitude information is matched with the touch force, and the target audio time information is matched with the touch time.
4. The method of claim 3, wherein the target tone feature information further comprises touch gesture information matched with the touch position and the touch time, and the touch gesture information comprises at least a slide gesture and a click gesture.
5. The method according to claim 1, wherein the simulated musical instrument is of a second musical instrument type, the device sensors include touch sensors corresponding to touch identification areas and sound sensors corresponding to sound wave collection positions, and the performance trigger information includes touch operation information and sound wave input information;
the acquiring of the performance trigger information generated by the user through the triggering operation of the device sensor of the simulated musical instrument component specifically includes:
receiving a touch sensing signal corresponding to the touch sensor and an acoustic wave signal corresponding to the acoustic sensor;
analyzing the touch sensing signal to determine touch operation information, and analyzing the sound wave signal to determine sound wave input information, wherein the touch operation information comprises a touch position and touch time, the sound wave input information comprises a sound wave peak value, the target sound frequency information is matched with the touch position, the target sound amplitude information is matched with the sound wave peak value, and the target sound time information is matched with the touch time.
6. The method of claim 1,
before acquiring performance trigger information generated by a user through a trigger operation of a device sensor of a simulated musical instrument component, the method further comprises:
receiving a broadcast performance instruction;
after determining the target audio source, the method further comprises:
generating a target sound source file based on the target sound source and the playing duration corresponding to the target sound source;
and when the playing time length corresponding to the target sound source file exceeds a first preset time length, sending the target sound source file to a preset server, so that the preset server generates a broadcast sound source file corresponding to a second preset time length based on the target sound source file and sends the broadcast sound source file to a broadcast object terminal, wherein the first preset time length is greater than the second preset time length.
7. The method according to claim 6, wherein after receiving the broadcast source file, the broadcast target terminal determines the broadcast time of the broadcast source file according to target sound time information corresponding to the target sound source in the broadcast source file and a third preset time duration, and plays the broadcast source file according to the broadcast time.
8. The method of claim 1,
before acquiring performance trigger information generated by a user through a trigger operation of a device sensor of a simulated musical instrument component, the method further comprises:
receiving a recording playing instruction;
after determining the target audio source, the method further comprises:
generating a target sound source file based on the target sound source, storing and sending the target sound source file to a preset server;
responding to a playback triggering instruction, and playing based on the local target sound source file or the target sound source file acquired from the preset server.
9. The method according to any one of claims 6 to 8, wherein the target sound characteristic information includes a plurality of pieces of target sound characteristic information, the plurality of pieces corresponding to a plurality of target sound sources; the determining the target sound source specifically further includes:
acquiring first target sound time information corresponding to a first target sound source in the target sound sources, and determining a play time offset based on the first target sound time information and the current time, wherein the play time offset is used for determining play time corresponding to the target sound source in the broadcast sound source file and is used for responding to the playback trigger instruction to determine play time corresponding to the target sound source in the target sound source file.
10. The method of claim 1,
before acquiring performance trigger information generated by a user through a trigger operation of a device sensor of a simulated musical instrument component, the method further comprises:
receiving a follow-up performance instruction;
after determining the target audio source, the method further comprises:
generating and saving a target sound source file based on the target sound source;
responding to a following triggering instruction, and analyzing a plurality of following note information corresponding to the target sound source file, wherein each following note information comprises a following position, a following strength, a following time and a following gesture;
outputting the note prompt information corresponding to the following note information one by one based on the following note information, and playing the sound source corresponding to the following note information in the target sound source file after receiving the operation feedback corresponding to the note prompt information.
11. The method of claim 1, further comprising:
responding to a follow-up playing instruction of a preset sound source file, and analyzing a plurality of follow-up note information corresponding to the preset sound source file, wherein each follow-up note information comprises a follow-up position, follow-up force, follow-up time and a follow-up gesture;
outputting the note prompt information corresponding to the following note information one by one based on the following note information, and playing the sound source corresponding to the following note information in the preset sound source file after receiving the operation feedback corresponding to the note prompt information.
12. An apparatus for operating a simulated musical instrument assembly for use in a game client, comprising:
a performance information receiving module for acquiring performance trigger information generated by a user through a trigger operation with an apparatus sensor of a simulated musical instrument component provided by the game client;
the characteristic information acquisition module is used for determining target sound characteristic information corresponding to the playing trigger information, wherein the target sound characteristic information comprises target sound time information, target sound frequency information and target sound amplitude information;
and the target sound source determining module is used for determining and playing a target sound source based on a sample sound source file and the target sound characteristic information, wherein the sample sound source file comprises at least one sample sound source and sample sound frequency information corresponding to each sample sound source.
13. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements a method of operating a simulated musical instrument assembly as claimed in any of claims 1 to 11.
14. A computer device comprising a storage medium, a processor and a computer program stored on the storage medium and executable on the processor, characterized in that the processor implements the method of operating a simulated musical instrument assembly of any of claims 1 to 11 when executing the computer program.
CN202011192393.2A 2020-10-30 2020-10-30 Method and device for operating simulated musical instrument assembly, storage medium and computer equipment Active CN112420006B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011192393.2A CN112420006B (en) 2020-10-30 2020-10-30 Method and device for operating simulated musical instrument assembly, storage medium and computer equipment


Publications (2)

Publication Number Publication Date
CN112420006A true CN112420006A (en) 2021-02-26
CN112420006B CN112420006B (en) 2022-08-05

Family

ID=74827200

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011192393.2A Active CN112420006B (en) 2020-10-30 2020-10-30 Method and device for operating simulated musical instrument assembly, storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN112420006B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113641851A (en) * 2021-08-11 2021-11-12 乐聚(深圳)机器人技术有限公司 Music score previewing method and device, terminal equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04125693A (en) * 1990-09-18 1992-04-27 Yamaha Corp Electronic musical instrument
CN1395233A (en) * 2001-07-05 2003-02-05 英群企业股份有限公司 Simulator of multimedia music instrument and its method
CN103235641A (en) * 2013-03-17 2013-08-07 浙江大学 6-dimensional sensory-interactive virtual keyboard instrument system and realization method thereof
CN105976800A (en) * 2015-03-13 2016-09-28 三星电子株式会社 Electronic device, method for recognizing playing of string instrument in electronic device
CN107705774A (en) * 2017-11-23 2018-02-16 李超鹏 A kind of electronic musical instrument analog machine
CN109448131A (en) * 2018-10-24 2019-03-08 西北工业大学 A kind of virtual piano based on Kinect plays the construction method of system
CN110689868A (en) * 2018-08-30 2020-01-14 司乃捷 Musical instrument playing processing method, musical instrument and musical instrument system
CN110895920A (en) * 2018-09-13 2020-03-20 程建铜 Bridge and plucked instrument




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant