CN116312431A - Electric tone key control method, apparatus, computer device, and storage medium - Google Patents


Info

Publication number
CN116312431A
CN116312431A
Authority
CN
China
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310280808.9A
Other languages
Chinese (zh)
Other versions
CN116312431B (en)
Inventor
王佳乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Ziyun Technology Co ltd
Original Assignee
Guangzhou Ziyun Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Ziyun Technology Co ltd filed Critical Guangzhou Ziyun Technology Co ltd
Priority to CN202310280808.9A priority Critical patent/CN116312431B/en
Publication of CN116312431A publication Critical patent/CN116312431A/en
Application granted granted Critical
Publication of CN116312431B publication Critical patent/CN116312431B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 Details of electrophonic musical instruments
    • G10H1/0091 Means for obtaining special acoustic effects
    • G10H2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/031 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    • G10H2210/081 Musical analysis for automatic key or tonality recognition, e.g. using musical rules or a knowledge base
    • G10H2210/155 Musical effects
    • G10H2210/265 Acoustic effect simulation, i.e. volume, spatial, resonance or reverberation effects added to a musical sound, usually by appropriate filtering or delays
    • G10H2210/281 Reverberation or echo
    • G10H2210/285 Electromechanical effectors therefor, i.e. using springs or similar electromechanical audio delay units

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Electrophonic Musical Instruments (AREA)

Abstract

The application relates to an electric tone key control method, apparatus, computer device, and storage medium. The method comprises the following steps: when a playback operation of the system for a target song is detected, acquiring a target audio segment of the target song within a target time period, the target time period being a period within a preset time range after the current playing time; determining, from the target audio segment, predicted key information corresponding to the target time period, the predicted key information characterizing the song key corresponding to each playing moment within the target time period; and synchronizing the predicted key information to audio host software of the system, and controlling an electric tone plug-in loaded in the audio host software to generate an electric tone effect matched with the target song based on the song key corresponding to each playing moment within the target time period. With this method, the key information of segments that have not yet been played can be acquired and synchronized automatically while a song plays, the key can be analyzed in advance for flexible adjustment, and the generation of a matching electric tone effect in response to key changes is ensured.

Description

Electric tone key control method, apparatus, computer device, and storage medium
Technical Field
The present application relates to the field of computer technology, and in particular to an electric tone key control method, apparatus, computer device, storage medium, and computer program product.
Background
With the development of sound card technology, vocal processing based on sound card effects is increasingly in demand; for example, an electric tone plug-in may be used to generate an electric tone effect. In the traditional approach, the user manually looks up the key of a song and manually modifies the key and scale on the electric tone plug-in. Adjusting the electric tone key in this way is difficult and inefficient; moreover, the key of some parts of the same song can change, so the traditional approach is inflexible and prone to producing off-key vocals.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an electric tone key control method, apparatus, computer device, storage medium, and computer program product that can solve the above problems.
In a first aspect, the present application provides an electric tone key control method, the method comprising:
when a playback operation of the system for a target song is detected, acquiring a target audio segment of the target song within a target time period; the target time period is a period within a preset time range after the current playing time;
determining, from the target audio segment, predicted key information corresponding to the target time period; the predicted key information is used to characterize the song key corresponding to each playing moment within the target time period; and
synchronizing the predicted key information to audio host software of the system, and controlling an electric tone plug-in loaded in the audio host software to generate an electric tone effect matched with the target song based on the song keys corresponding to the playing moments within the target time period.
In one embodiment, the acquiring the target audio segment of the target song within the target time period includes:
determining a target playing time according to the current playing time of the target song and the preset time range; and
taking the audio segment of the target song from the current playing time to the target playing time as the target audio segment.
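The two steps above amount to a simple window computation over the playback timeline. A minimal sketch in Python (the function name, units, and clamping to the song's end are illustrative assumptions, not specified by the patent):

```python
def target_window(current_time_s: float, preset_range_s: float,
                  total_duration_s: float) -> tuple[float, float]:
    """Return the target time period: the span from the current playing
    time to the target playing time, clamped to the end of the song."""
    target_playing_time = min(current_time_s + preset_range_s, total_duration_s)
    return (current_time_s, target_playing_time)

# With a 4-second preset range at the start of a 200-second song,
# the 0-to-4-second segment is taken as the target audio segment.
assert target_window(0.0, 4.0, 200.0) == (0.0, 4.0)
# Near the end of the song the window is clamped to the song duration.
assert target_window(198.0, 4.0, 200.0) == (198.0, 200.0)
```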
In one embodiment, the determining, from the target audio segment, the predicted key information corresponding to the target time period includes:
acquiring audio data corresponding to each playing moment in the target audio segment, the audio data including a plurality of audio frames of the target song; and
obtaining the song key corresponding to each playing moment according to the key corresponding to each audio frame at that playing moment, as the predicted key information corresponding to the target time period.
In one embodiment, the method further comprises:
during continuous playback of the target song, after it is detected that playback of the target time period corresponding to the most recently acquired target audio segment has ended, acquiring the target audio segment of the target song in the next target time period as the most recently acquired target audio segment.
In one embodiment, the method further comprises:
during continuous playback of the target song, when it is detected that the corresponding predicted key information has been generated for the most recently acquired target audio segment, acquiring the target audio segment of the target song in the next target time period as the most recently acquired target audio segment.
In one embodiment, the controlling the electric tone plug-in loaded in the audio host software to generate an electric tone effect matched with the target song based on the song key corresponding to each playing moment within the target time period includes:
determining, according to the song key corresponding to each playing moment within the target time period, tone change information of the target song within the preset time range after the current playing time; and
generating, using the tone change information, the electric tone effect matched with the target song according to the playing progress of the target song.
In one embodiment, the method further comprises:
displaying the playing progress of the target song in a first area of a playback display interface; and
displaying, in a second area of the playback display interface, the song key corresponding to each playing moment within the target time period corresponding to the target audio segment, together with its key-related information.
In a second aspect, the present application further provides an electric tone key control device, the device comprising:
a target audio segment acquisition module, configured to acquire a target audio segment of a target song within a target time period when a playback operation of the system for the target song is detected, the target time period being a period within a preset time range after the current playing time;
a predicted key information determination module, configured to determine, from the target audio segment, predicted key information corresponding to the target time period, the predicted key information characterizing the song key corresponding to each playing moment within the target time period; and
an electric tone key control module, configured to synchronize the predicted key information to audio host software of the system, and to control an electric tone plug-in loaded in the audio host software to generate an electric tone effect matched with the target song based on the song key corresponding to each playing moment within the target time period.
In a third aspect, the present application further provides a computer device. The computer device comprises a memory storing a computer program and a processor that, when executing the computer program, implements the steps of the electric tone key control method described above.
In a fourth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the electric tone key control method described above.
In a fifth aspect, the present application further provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of the electric tone key control method described above.
According to the above electric tone key control method, apparatus, computer device, storage medium, and computer program product, when a playback operation of the system for a target song is detected, a target audio segment of the target song within a target time period is acquired, the target time period being a period within a preset time range after the current playing time; predicted key information corresponding to the target time period is then determined from the target audio segment, the predicted key information characterizing the song key corresponding to each playing moment within the target time period; and the predicted key information is synchronized to audio host software of the system, where an electric tone plug-in loaded in the audio host software is controlled to generate an electric tone effect matched with the target song based on the song key corresponding to each playing moment within the target time period. In this way, the key information of unplayed audio segments is acquired and synchronized automatically, and the electric tone key is adjusted automatically: because the song keys for the period after the current playing time are predicted, the key can be analyzed in advance and adjusted flexibly without manual lookup and modification, which improves the efficiency of electric tone key control, allows the key to be predicted accurately, and ensures that a matching electric tone effect is generated in response to key changes.
Drawings
FIG. 1 is a flow chart of an electric tone key control method in one embodiment;
FIG. 2 is a schematic diagram of an interface presentation in one embodiment;
FIG. 3 is a flow chart of another electric tone key control method in one embodiment;
FIG. 4 is a block diagram of an electric tone key control device in one embodiment;
FIG. 5 is an internal block diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
It should be noted that the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for presentation, analyzed data, etc.) involved in the present application are information and data authorized by the user or fully authorized by all parties; correspondingly, the application also provides a corresponding user authorization entry through which the user may choose to grant or refuse authorization.
In one embodiment, as shown in fig. 1, an electric tone key control method is provided. For illustration, this embodiment is described as applied to a terminal, where the terminal may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, an Internet of Things device, or a portable wearable device. In this embodiment, the method includes the following steps:
Step 101: when a playback operation of the system for a target song is detected, acquire a target audio segment of the target song within a target time period; the target time period is a period within a preset time range after the current playing time.
A client of the electric tone key control software may be opened, through which audio playback events of the system for a target song are detected; for example, when the system is detected to be playing song A, song A may be taken as the target song.
In practical application, a preset time range can be obtained. When a playback operation of the system for the target song is detected, the target playing time can be determined from the current playing time of the target song and the preset time range, and the audio segment of the target song from the current playing time to the target playing time can then be taken as the target audio segment for further key analysis. The target audio segment may comprise a plurality of audio frames of the target song at a plurality of playing moments.
Specifically, the system may have a plurality of playback devices for audio output. When the system is detected to be playing song A, the playback device playing song A may be taken as the target playback channel. For example, a plurality of playback channels connected to the system may be acquired and screened for availability; according to the detected audio playback event of the system for the target song, the target playback channel corresponding to that event is determined; the audio stream data output by the target playback channel may then be captured through the client of the electric tone key control software so as to obtain the target audio segment.
In an example, when it is detected that the user starts playing song A (i.e., the target song), a future segment covering a preset time range (e.g., 4 seconds) may be captured from the playing song; for example, with a starting playing time of 0 seconds (i.e., the current playing time), the song segment from 0 to 4 seconds (i.e., the target time period) may be captured as the target audio segment.
In an alternative embodiment, as shown in fig. 2, when a playback operation of the system for the target song is detected, segment acquisition for the whole song may be planned according to the total duration of the target song; for example, target audio segments in multiple target time periods, such as the song segment from 0 to 4 seconds, the song segment from 4 to 8 seconds, and so on, may be acquired in time order based on the preset time range (e.g., 4 seconds).
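The whole-song segment planning described above can be sketched as a partition of the song duration into consecutive target time periods (a hypothetical helper; the patent does not prescribe an implementation):

```python
def plan_segments(total_duration_s: float, preset_range_s: float = 4.0):
    """Partition the whole song into consecutive target time periods,
    e.g. the 0-4 s segment, the 4-8 s segment, and so on; the final
    segment may be shorter than the preset time range."""
    segments, start = [], 0.0
    while start < total_duration_s:
        end = min(start + preset_range_s, total_duration_s)
        segments.append((start, end))
        start = end
    return segments

# A 10-second song with a 4-second preset range yields three segments.
assert plan_segments(10.0) == [(0.0, 4.0), (4.0, 8.0), (8.0, 10.0)]
```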
Step 102: determine, from the target audio segment, predicted key information corresponding to the target time period; the predicted key information is used to characterize the song key corresponding to each playing moment within the target time period.
As an example, the song keys corresponding to the playing moments within the target time period included in the predicted key information may be different keys, for example Bm, Db, Dbm, Gbm, A, B, D, E, Abm, Am, Dm, or Em, and may also include other keys; this embodiment places no specific limit on them.
In a specific implementation, the audio data corresponding to each playing moment in the target audio segment can be acquired, the audio data including a plurality of audio frames of the target song; the song key corresponding to each playing moment can then be obtained, according to the key corresponding to each audio frame at that playing moment, as the predicted key information corresponding to the target time period.
For example, for any playing moment, the key of each corresponding audio frame can be obtained through key analysis; the weights corresponding to the different keys can then be determined according to the number of frames in each key, giving the key analysis result for that playing moment, from which the song key corresponding to that playing moment can be determined.
As a further example, among the several keys in the key analysis result for a playing moment, the key with the maximum weight, i.e., the key occurring most often, can be taken as the song key corresponding to that playing moment. In this way, the client obtains the song being played, computes a key analysis result based on these weights, and accurately identifies the predicted song key.
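The weight-based vote described above can be sketched as follows; here the weight of each candidate key is simply its share of the audio frames at that playing moment (an assumption, since the patent does not define the weighting formula, and all names are illustrative):

```python
from collections import Counter

def song_key_at_moment(frame_keys):
    """Given the key detected for each audio frame at one playing moment,
    weight each candidate key by its frame count and take the key with
    the maximum weight as the song key for that moment."""
    counts = Counter(frame_keys)
    total = sum(counts.values())
    weights = {key: n / total for key, n in counts.items()}  # key analysis result
    best = max(weights, key=weights.get)  # the key occurring most often
    return best, weights

key, weights = song_key_at_moment(["Am", "Am", "C", "Am", "Em"])
assert key == "Am" and weights["Am"] == 0.6
```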
In an example, some songs are not in a single key throughout: the climax section may modulate up or down, so using one key for the whole song is unsuitable and cannot meet the requirements for generating the electric tone effect. Compared with the traditional approach of applying the same key to an entire song, which can cause off-key vocals at the climax, the present technical solution performs key analysis on the accompaniment segment for a period of time (e.g., 4 seconds) in the future, obtaining the song key for that upcoming period in advance. This avoids the user singing off key when the key changes at a modulated section of the song, makes the generation of the electric tone effect more stable, and reduces the rate of off-key vocal errors.
Step 103: synchronize the predicted key information to the audio host software of the system, and control the electric tone plug-in loaded in the audio host software to generate an electric tone effect matched with the target song based on the song key corresponding to each playing moment within the target time period.
After the predicted key information is obtained, it can be synchronized to the audio host software of the system. Since the audio host software is loaded with the electric tone plug-in, the predicted key information can be synchronized to that plug-in, and the plug-in can be controlled to generate an electric tone effect matched with the target song based on the song keys corresponding to the playing moments within the target time period.
In an example, the audio host software may be loaded with a plurality of plug-ins, such as reverberation, noise reduction, equalizer, and electric tone plug-ins; the electric tone plug-in must be kept consistent with the accompaniment key to achieve the vocal-tuning effect.
In yet another example, through the client of the electric tone key control software, the playing progress of the song the user is playing can be obtained and segments cut from it; the predicted key information can be obtained by analyzing each segment in advance, with the playing progress acquired in a loop to obtain the predicted key information corresponding to each segment; the key of the electric tone plug-in can then be modified according to the predicted key information to generate an electric tone effect matched with the song being played.
Compared with the traditional approach, the technical solution of this embodiment automates the whole electric tone key control flow, so the user needs no manual operation. By analyzing the key in advance, the generated electric tone effect can be adjusted flexibly for any modulated part of a song, and the key of the upcoming song segment, obtained by this look-ahead, prevents the user from singing off key when the accompaniment modulates up or down.
According to the above electric tone key control method, when a playback operation of the system for a target song is detected, a target audio segment of the target song within a target time period is acquired; predicted key information corresponding to the target time period is then determined from the target audio segment; and the predicted key information is synchronized to the audio host software of the system, where the electric tone plug-in loaded in the audio host software is controlled to generate an electric tone effect matched with the target song based on the song key corresponding to each playing moment within the target time period. The key information of unplayed audio segments is thus acquired and synchronized automatically during playback, and the electric tone key is adjusted automatically: by predicting the song keys for the period after the current playing time, the key can be analyzed in advance and adjusted flexibly without manual lookup and modification, improving the efficiency of electric tone key control, predicting the key accurately, and ensuring that a matching electric tone effect is generated in response to key changes.
In one embodiment, the acquiring the target audio segment of the target song within the target time period may include the following steps:
determining a target playing time according to the current playing time of the target song and the preset time range; and taking the audio segment of the target song from the current playing time to the target playing time as the target audio segment.
In practical application, when it is detected that the user starts playing song A (i.e., the target song), a future segment covering a preset time range (e.g., 4 seconds) may be captured from the playing song; for example, if playback starts at 0 seconds (i.e., the current playing time) and the target playing time to capture up to is 4 seconds, the target audio segment may be obtained by capturing the song segment from 0 to 4 seconds (i.e., the target time period).
In an alternative embodiment, for the client to capture the audio stream data output by the target playback channel, a system API (Application Programming Interface) may be used to obtain the target playback channel, and a PortAudio module with WASAPI loopback support may then be used to read the audio stream output by the target playback channel as audio stream data.
In this embodiment, the target playing time is determined from the current playing time of the target song and the preset time range, and the audio segment of the target song from the current playing time to the target playing time is then taken as the target audio segment, so that unplayed audio segments can be acquired automatically during playback, providing data support for further key analysis and prediction.
In one embodiment, the determining, from the target audio segment, the predicted key information corresponding to the target time period may include the following steps:
acquiring audio data corresponding to each playing moment in the target audio segment, the audio data including a plurality of audio frames of the target song; and obtaining the song key corresponding to each playing moment, according to the key corresponding to each audio frame at that playing moment, as the predicted key information corresponding to the target time period.
In an example, by acquiring the audio data corresponding to each playing moment in the target audio segment, the key of each audio frame at a playing moment may be obtained through key analysis; the weights corresponding to the different keys may then be determined according to the number of frames in each key, giving the key analysis result for each playing moment, from which the song key corresponding to that playing moment is determined. The key analysis result can be presented as a ranking list, and the weights corresponding to the different keys can be computed on this basis to further determine the song key corresponding to each playing moment. For instance, the ranking list may be shown as a bar chart in which bars of different colors or textures mark the different keys and the height of each bar represents the weight of its key; other display styles may also be used, and this embodiment places no specific limit on them.
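The ranking-list presentation described above can be derived directly from the frame-level weights; a sketch under the same frame-share weighting assumption as before (the bar-chart rendering itself is omitted, and all names are illustrative):

```python
from collections import Counter

def key_leaderboard(frame_keys):
    """Rank candidate keys by weight (their share of the audio frames),
    as in the described ranking list where bar height encodes weight."""
    counts = Counter(frame_keys)
    total = sum(counts.values())
    return sorted(((k, n / total) for k, n in counts.items()),
                  key=lambda kv: kv[1], reverse=True)

board = key_leaderboard(["Em", "Em", "Em", "G", "Em", "G", "Bm"])
assert board[0] == ("Em", 4 / 7)                  # heaviest key tops the list
assert [k for k, _ in board] == ["Em", "G", "Bm"]
```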
In an alternative embodiment, by performing key analysis on the captured audio stream, the scale corresponding to each audio frame at each playing moment in the target audio segment can be obtained; the predicted key and scale corresponding to each audio frame can be displayed, as can the key and scale corresponding to each playing moment; other analyzed music-theory information can also be displayed, and this embodiment places no specific limit on it.
In this embodiment, by acquiring the audio data corresponding to each playing moment in the target audio segment and then obtaining the song key corresponding to each playing moment from the keys of the audio frames at that moment, as the predicted key information corresponding to the target time period, the key analysis can be performed on the basis of weight computation and the predicted song key can be identified accurately.
In one embodiment, the method may further comprise the steps of:
during continuous playback of the target song, after it is detected that playback of the target time period corresponding to the most recently acquired target audio segment has ended, acquiring the target audio segment of the target song in the next target time period as the most recently acquired target audio segment.
In a specific implementation, taking a song the user starts playing as an example, with playback starting at 0 seconds (i.e., the current playing time), the song segment from 0 to 4 seconds (i.e., the target time period) can be captured as the most recently acquired target audio segment; then, when the playing progress reaches 4 seconds, i.e., when playback of the target time period corresponding to the most recently acquired target audio segment is detected to have ended, the song segment from 4 to 8 seconds (i.e., the next target time period) can be captured for the new current playing time of 4 seconds, so that the playing progress is acquired and song segments are captured in a loop.
In this embodiment, during continuous playback of the target song, after playback of the target time period corresponding to the most recently acquired target audio segment is detected to have ended, the target audio segment of the target song in the next target time period is acquired as the most recently acquired target audio segment, so that the playing progress can be acquired in a loop to obtain the predicted key information corresponding to each segment, improving the efficiency of electric tone key control.
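The playback-end trigger in this embodiment can be simulated as a pure function of the playing progress (a sketch assuming a 4-second preset range; no real audio capture is involved, and all names are illustrative):

```python
def segments_fetched_by(progress_s, preset_range_s=4.0, total_duration_s=12.0):
    """Return the target audio segments fetched by the time playback has
    reached progress_s: a new segment is fetched each time playback of the
    previously fetched segment's time period has ended."""
    fetched = [(0.0, min(preset_range_s, total_duration_s))]  # fetched at play start
    while fetched[-1][1] <= progress_s and fetched[-1][1] < total_duration_s:
        start = fetched[-1][1]
        fetched.append((start, min(start + preset_range_s, total_duration_s)))
    return fetched

# At 4 s the 0-4 s period has finished playing, so the 4-8 s segment is fetched.
assert segments_fetched_by(4.0) == [(0.0, 4.0), (4.0, 8.0)]
assert segments_fetched_by(3.9) == [(0.0, 4.0)]
```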
In one embodiment, the method may further comprise the steps of:
during continuous playback of the target song, when it is detected that the corresponding predicted key information has been generated for the most recently acquired target audio segment, acquiring the target audio segment of the target song in the next target time period as the most recently acquired target audio segment.
In practical application, the client can plan the segmentation of the whole song according to the total duration of the song being played, obtaining each song segment without consuming excessive performance. Taking a song the user starts playing as an example, with playback starting at 0 seconds (i.e., the current playing time), the song segment from 0 to 4 seconds (i.e., the target time period) may be captured as the most recently acquired target audio segment; then, once the 0-to-4-second audio segment is detected to have been analyzed, that is, when the corresponding predicted key information is detected to have been generated for the most recently acquired target audio segment, the song segment from 4 to 8 seconds (i.e., the next target time period) may be captured, so that song segments are acquired piece by piece.
In this embodiment, during continuous playback of the target song, when the corresponding predicted key information is detected to have been generated for the most recently acquired target audio segment, the target audio segment of the target song in the next target time period is acquired as the most recently acquired target audio segment, so that the predicted key information corresponding to each segment can be analyzed on the basis of the segment plan, improving the efficiency of electric tone key control.
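The analysis-completion trigger in this embodiment, fetching the next segment as soon as the previous one has been analyzed, can be sketched as a loop in which the analysis step itself paces the acquisition (`analyze` is a stand-in for the key-analysis step; all names are illustrative):

```python
def analysis_driven_fetch(total_duration_s, analyze, preset_range_s=4.0):
    """Fetch the first segment; each time predicted key information for the
    most recently fetched segment has been generated, fetch the next one."""
    start, predictions = 0.0, []
    while start < total_duration_s:
        end = min(start + preset_range_s, total_duration_s)
        predictions.append(((start, end), analyze(start, end)))
        start = end  # analysis complete -> fetch the next target time period
    return predictions

# A toy analyzer that "detects" a modulation from Am to C at 4 seconds.
preds = analysis_driven_fetch(8.0, lambda s, e: "Am" if s < 4 else "C")
assert preds == [((0.0, 4.0), "Am"), ((4.0, 8.0), "C")]
```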
In one embodiment, controlling the electric sound plug-in loaded in the audio host software to generate an electric sound effect matched with the target song based on the song key corresponding to each playing time in the target time period may include the following steps:
determining, according to the song key corresponding to each playing time in the target time period, tone change information of the target song within the preset time range after the current playing time; and generating, using the tone change information, the electric sound effect matched with the target song according to the playing progress of the target song.
As an example, the tone change information may be used to characterize the pitch changes of particular portions of the song being played, such as the rise and fall of key around the song's climax.
In an example, because the electric sound plug-in must stay consistent with the key of the accompaniment to achieve its pitch-correction effect, synchronizing the prediction key information to the electric sound plug-in of the audio host software allows the key of the plug-in to be updated automatically once the client has analyzed the song key for the period within the preset time range after the current playing time. The key scale of the electric sound plug-in is therefore modified automatically according to the prediction key information, without manual lookup and modification; the key can be analyzed in advance and adjusted flexibly, ensuring that a matched electric sound effect is generated for each key change.
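The synchronization just described might look like the following sketch. `ElectroTonePlugin` and its `set_key` method are hypothetical stand-ins; the patent does not name the audio host software's actual plug-in interface.

```python
class ElectroTonePlugin:
    """Hypothetical stand-in for an auto-tune style plug-in hosted in a DAW."""

    def __init__(self):
        self.key = None  # key scale the plug-in currently corrects toward

    def set_key(self, key):
        self.key = key


def sync_predicted_key(plugin, predicted_keys, playing_time):
    """Apply the key predicted for the given playing time, so the plug-in
    already matches the accompaniment when that moment is played."""
    upcoming = predicted_keys.get(playing_time)
    if upcoming is not None and upcoming != plugin.key:
        plugin.set_key(upcoming)
    return plugin.key
```

Called once per analyzed moment, this keeps the plug-in's key scale in step with the prediction key information without any manual modification.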
In this embodiment, the tone change information of the target song within the preset time range after the current playing time is determined from the song key corresponding to each playing time in the target time period, and the tone change information is then used to generate the electric sound effect matched with the target song according to its playing progress. The key of the electric sound plug-in can thus be adapted flexibly, avoiding off-key vocals caused by rises and falls of key in the accompaniment.
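One plausible reading of the tone change information is the set of moments at which the predicted song key rises or falls; the patent does not fix a representation, so the following is a sketch under that assumption.

```python
def tone_changes(keys_by_moment):
    """Scan the per-moment song keys in time order and return
    (moment, previous_key, new_key) for each key change."""
    changes = []
    moments = sorted(keys_by_moment)
    for prev, cur in zip(moments, moments[1:]):
        if keys_by_moment[prev] != keys_by_moment[cur]:
            changes.append((cur, keys_by_moment[prev], keys_by_moment[cur]))
    return changes
```

Each detected change marks a point where the electric sound plug-in's key should be updated before playback reaches that moment.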
In one embodiment, the method may further comprise the steps of:
displaying the playing progress of the target song in a first area of a playing display interface; and displaying the song key corresponding to each playing time and the key related information thereof in the target time period corresponding to the target audio fragment in a second area of the playing display interface.
In an example, as shown in fig. 2, the playing progress of the target song may be displayed in a first area of the playing display interface, and the key and scale (i.e. the song key and its key-related information) corresponding to each playing time in the target time period corresponding to the target audio clip may be displayed in a second area of the playing display interface; other analyzed music-theory data may also be displayed, which is not limited in this embodiment.
In this embodiment, by displaying the playing progress of the target song in the first area of the playing display interface, and displaying in the second area the song key and its key-related information corresponding to each playing time in the target time period of the target audio clip, the prediction information obtained by the key analysis can be presented visually, and the user can learn the song key at future moments.
In one embodiment, as shown in fig. 3, a flow chart of another method of electric tone control is provided. In this embodiment, the method includes the steps of:
in step 301, when a playing operation of the system for the target song is detected, a target playing time is determined according to the current playing time of the target song and a preset time range. In step 302, the audio clip from the current playing time to the target playing time in the target song is taken as the target audio clip. In step 303, audio data corresponding to each playing time in the target audio clip are obtained; the audio data include a plurality of audio frames of the target song. In step 304, a song key corresponding to each playing time is obtained from the key corresponding to each audio frame at that playing time, and is used as the prediction key information corresponding to the target time period. In step 305, the prediction key information is synchronized to the audio host software of the system, and the electric sound plug-in loaded in the audio host software is controlled to generate an electric sound effect matched with the target song based on the song key corresponding to each playing time in the target time period. In step 306, during continuous playing of the target song, after playing of the target time period corresponding to the most recently acquired target audio segment is detected to have ended, the target audio segment of the target song in the next target time period is acquired as the most recently acquired target audio segment. For specific limitations of the above steps, reference may be made to the limitations of the electric tone key control method above, which are not repeated here.
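Steps 301 to 304 can be sketched end to end as follows. `analyze_key` stands in for whatever frame-level key detector the client actually uses, which the patent does not specify; the data layout is likewise an illustrative assumption.

```python
def predict_key_info(audio_by_moment, current_time, preset_range, analyze_key):
    """audio_by_moment maps each playing moment to its list of audio frames.
    Returns the prediction key information for the target time period
    [current_time, current_time + preset_range)."""
    target_time = current_time + preset_range                        # step 301
    clip = {t: frames for t, frames in audio_by_moment.items()
            if current_time <= t < target_time}                      # steps 302-303
    return {t: analyze_key(frames) for t, frames in clip.items()}    # step 304
```

The result would then be synchronized to the audio host software (step 305), and the function called again for the next target time period once playback of this one ends (step 306).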
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the steps are not strictly limited in execution order and may be executed in other orders. Moreover, at least some of the steps in these flowcharts may comprise multiple sub-steps or stages, which need not be completed at the same time but may be executed at different times, and which need not be executed sequentially but may be executed in turn or alternately with at least part of the other steps or their sub-steps or stages.
Based on the same inventive concept, an embodiment of the present application further provides an electric tone key control apparatus for implementing the electric tone key control method mentioned above. The solution provided by the apparatus is similar to that described in the above method, so for specific limitations in the one or more apparatus embodiments provided below, reference may be made to the limitations of the electric tone key control method above, which are not repeated here.
In one embodiment, as shown in fig. 4, there is provided an electric tone control apparatus comprising:
a target audio segment obtaining module 401, configured to obtain a target audio segment of a target song in a target time period when detecting a play operation of the system for the target song; the target time period is a time period in a preset time range after the current playing time;
a prediction key information determining module 402, configured to determine prediction key information corresponding to the target time period according to the target audio segment; the predicted key information is used for representing the key of the song corresponding to each playing moment in the target time period;
and the electric tone control module 403 is configured to synchronize the prediction key information to the audio host software of the system and to control the electric sound plug-in loaded in the audio host software to generate an electric sound effect matched with the target song based on the song key corresponding to each playing time in the target time period.
In one embodiment, the target audio clip obtaining module 401 includes:
a target playing time determining sub-module, configured to determine a target playing time according to the current playing time of the target song and the preset time range;
And the target audio fragment obtaining submodule is used for taking the audio fragment from the current playing time to the target playing time in the target song as the target audio fragment.
In one embodiment, the predictive key information determination module 402 includes:
the audio data acquisition sub-module is used for acquiring audio data corresponding to each playing time in the target audio fragment; the audio data includes a plurality of audio frames of the target song;
and the song key prediction sub-module is used for obtaining the song key corresponding to each playing time according to the key corresponding to each audio frame in each playing time and taking the song key as the prediction key information corresponding to the target time period.
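The aggregation performed by this sub-module, from per-frame keys to a per-moment song key, is not specified in detail; a simple majority vote over the frame keys at each playing time is one plausible sketch, and the function names here are illustrative assumptions.

```python
from collections import Counter


def song_key_for_moment(frame_keys):
    """Pick the most common key among the audio frames of one playing time."""
    if not frame_keys:
        return None
    return Counter(frame_keys).most_common(1)[0][0]


def prediction_key_info(frames_by_moment):
    """Map every playing time in the target time period to its song key,
    producing the prediction key information."""
    return {t: song_key_for_moment(keys) for t, keys in frames_by_moment.items()}
```

A majority vote makes the per-moment prediction robust to individual misdetected frames; a real system might instead weight frames by detection confidence.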
In one embodiment, the apparatus further comprises:
and the first segment acquisition module is used for acquiring the target audio segment of the target song in the next target time period as the latest acquired target audio segment after the end of the playing of the target time period corresponding to the latest acquired target audio segment is detected in the continuous playing process of the target song.
In one embodiment, the apparatus further comprises:
and the second segment acquisition module is used for acquiring the target audio segment of the target song in the next target time period as the latest acquired target audio segment when the generation of corresponding prediction key information for the latest acquired target audio segment is detected in the continuous playing process of the target song.
In one embodiment, the electric tone control module 403 includes:
the tone change information determining submodule is used for determining tone change information of the target song in a preset time range after the current playing time according to the song key corresponding to each playing time in the target time period;
and the electric sound effect generation sub-module is used for generating electric sound effects matched with the target song according to the playing progress of the target song by adopting the tone change information.
In one embodiment, the apparatus further comprises:
the playing progress display module is used for displaying the playing progress of the target song in a first area of the playing display interface;
and the prediction key display module is used for displaying the song key corresponding to each playing moment and the key associated information thereof in the target time period corresponding to the target audio fragment in the second area of the playing display interface.
Each of the above modules in the electric tone control apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware form in, or be independent of, a processor in the computer device, or may be stored in software form in a memory of the computer device, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal whose internal structure may be as shown in fig. 5. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program; the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The communication interface of the computer device is used for wired or wireless communication with an external terminal; wireless communication may be realized through WIFI, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements an electric tone key control method. The display screen of the computer device may be a liquid crystal display or an electronic ink display; the input device may be a touch layer covering the display screen, a key, a trackball, or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad, mouse, or the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 5 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
when the playing operation of the system for the target song is detected, acquiring a target audio fragment of the target song in a target time period; the target time period is a time period in a preset time range after the current playing time;
according to the target audio fragment, determining prediction key information corresponding to the target time period; the predicted key information is used for representing the key of the song corresponding to each playing moment in the target time period;
and synchronizing the predicted key information to audio host software of the system, controlling an electric sound plug-in unit loaded in the audio host software, and generating electric sound effects matched with the target song based on song keys corresponding to playing moments in the target time period.
In one embodiment, the processor, when executing the computer program, also implements the steps of the electric tone control method in the other embodiments described above.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
when the playing operation of the system for the target song is detected, acquiring a target audio fragment of the target song in a target time period; the target time period is a time period in a preset time range after the current playing time;
according to the target audio fragment, determining prediction key information corresponding to the target time period; the predicted key information is used for representing the key of the song corresponding to each playing moment in the target time period;
and synchronizing the predicted key information to audio host software of the system, controlling an electric sound plug-in unit loaded in the audio host software, and generating electric sound effects matched with the target song based on song keys corresponding to playing moments in the target time period.
In one embodiment, the computer program when executed by the processor also implements the steps of the electric tone control method in the other embodiments described above.
In one embodiment, a computer program product is provided comprising a computer program which, when executed by a processor, performs the steps of:
when the playing operation of the system for the target song is detected, acquiring a target audio fragment of the target song in a target time period; the target time period is a time period in a preset time range after the current playing time;
according to the target audio fragment, determining prediction key information corresponding to the target time period; the predicted key information is used for representing the key of the song corresponding to each playing moment in the target time period;
and synchronizing the predicted key information to audio host software of the system, controlling an electric sound plug-in unit loaded in the audio host software, and generating electric sound effects matched with the target song based on song keys corresponding to playing moments in the target time period.
In one embodiment, the computer program when executed by the processor also implements the steps of the electric tone control method in the other embodiments described above.
Those skilled in the art will appreciate that implementing all or part of the above methods may be accomplished by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may include the flows of the embodiments of the methods above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM is available in a variety of forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, data processing logic units based on quantum computing, and the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, as long as a combination of technical features involves no contradiction, it should be considered to fall within the scope of this description.
The above embodiments merely express several implementations of the present application, and their descriptions are specific and detailed, but are not therefore to be construed as limiting the scope of the application. It should be noted that those of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A method of tone control, the method comprising:
when the playing operation of the system for the target song is detected, acquiring a target audio fragment of the target song in a target time period; the target time period is a time period in a preset time range after the current playing time;
according to the target audio fragment, determining prediction key information corresponding to the target time period; the predicted key information is used for representing the key of the song corresponding to each playing moment in the target time period;
And synchronizing the predicted key information to audio host software of the system, controlling an electric sound plug-in unit loaded in the audio host software, and generating electric sound effects matched with the target song based on song keys corresponding to playing moments in the target time period.
2. The method of claim 1, wherein the obtaining the target audio clip for the target song for the target time period comprises:
determining a target playing time according to the current playing time of the target song and the preset time range;
and taking the audio fragment from the current playing time to the target playing time in the target song as the target audio fragment.
3. The method of claim 1, wherein determining, from the target audio segment, predicted key information corresponding to the target time period, comprises:
acquiring audio data corresponding to each playing time in the target audio fragment; the audio data includes a plurality of audio frames of the target song;
and obtaining the song key corresponding to each playing time according to the key corresponding to each audio frame in each playing time, and taking the song key as the prediction key information corresponding to the target time period.
4. The method according to claim 1, wherein the method further comprises:
and in the continuous playing process of the target song, after the end of the playing of the target time period corresponding to the latest acquired target audio fragment is detected, acquiring the target audio fragment of the target song in the next target time period as the latest acquired target audio fragment.
5. The method according to claim 1, wherein the method further comprises:
and in the continuous playing process of the target song, when the generation of corresponding prediction key information aiming at the latest acquired target audio fragment is detected, acquiring the target audio fragment of the target song in the next target time period as the latest acquired target audio fragment.
6. The method of claim 1, wherein controlling the electrical plug-in loaded in the audio host software to generate electrical sound effects matching the target song based on the song key corresponding to each playing time in the target time period comprises:
according to the song basic tone corresponding to each playing time in the target time period, determining tone change information of the target song in a preset time range after the current playing time;
And generating the electric sound effect matched with the target song according to the playing progress of the target song by adopting the tone change information.
7. The method according to any one of claims 1 to 6, further comprising:
displaying the playing progress of the target song in a first area of a playing display interface;
and displaying the song key corresponding to each playing time and the key related information thereof in the target time period corresponding to the target audio fragment in a second area of the playing display interface.
8. An electric tone control device, the device comprising:
the target audio fragment acquisition module is used for acquiring a target audio fragment of a target song in a target time period when the play operation of the system for the target song is detected; the target time period is a time period in a preset time range after the current playing time;
the prediction key information determining module is used for determining prediction key information corresponding to the target time period according to the target audio fragment; the predicted key information is used for representing the key of the song corresponding to each playing moment in the target time period;
and the electric tone control module is used for synchronizing the prediction key information to the audio host software of the system, controlling the electric sound plug-in loaded in the audio host software, and generating an electric sound effect matched with the target song based on the song key corresponding to each playing moment in the target time period.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
CN202310280808.9A 2023-03-22 2023-03-22 Electric tone key control method, apparatus, computer device, and storage medium Active CN116312431B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310280808.9A CN116312431B (en) 2023-03-22 2023-03-22 Electric tone key control method, apparatus, computer device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310280808.9A CN116312431B (en) 2023-03-22 2023-03-22 Electric tone key control method, apparatus, computer device, and storage medium

Publications (2)

Publication Number Publication Date
CN116312431A true CN116312431A (en) 2023-06-23
CN116312431B CN116312431B (en) 2023-11-24

Family

ID=86802882

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310280808.9A Active CN116312431B (en) 2023-03-22 2023-03-22 Electric tone key control method, apparatus, computer device, and storage medium

Country Status (1)

Country Link
CN (1) CN116312431B (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE3102643A1 (en) * 1981-01-27 1982-08-05 Matth. Hohner Ag, 7218 Trossingen "CIRCUIT ARRANGEMENT FOR AN ELECTRONIC MUSIC INSTRUMENT"
US5416265A (en) * 1992-10-20 1995-05-16 Kabushiki Kaisha Kawai Gakki Seisakusho Sound effect-creating device
WO2011001701A1 (en) * 2009-06-30 2011-01-06 本田技研工業株式会社 Sound effect generating device
DE102010044842A1 (en) * 2010-09-07 2012-03-08 Ilja Dzampajev Clavier for e.g. electronic keyboard instrument, has force component converted into control signal for unit for producing periodic fluctuation of tone pitch and/or into preadjusted tone and/or sound effect and/or into control command
CN107808655A (en) * 2017-10-27 2018-03-16 广州酷狗计算机科技有限公司 Acoustic signal processing method, device, electronic equipment and storage medium
CN107910018A (en) * 2017-10-30 2018-04-13 广州视源电子科技股份有限公司 Sound effect treatment method and system, computer-readable storage medium and equipment
CN108010512A (en) * 2017-12-05 2018-05-08 广东小天才科技有限公司 The acquisition methods and recording terminal of a kind of audio
CN208477906U (en) * 2018-06-02 2019-02-05 江西创成微电子有限公司 A kind of apparatus for processing audio
CN109389988A (en) * 2017-08-08 2019-02-26 腾讯科技(深圳)有限公司 Audio adjusts control method and device, storage medium and electronic device
CN112201263A (en) * 2020-10-16 2021-01-08 广州资云科技有限公司 Electric tone adjusting system based on song recognition
CN113641329A (en) * 2021-08-10 2021-11-12 广州艾美网络科技有限公司 Sound effect configuration method and device, intelligent sound box, computer equipment and storage medium
CN113689837A (en) * 2021-08-24 2021-11-23 北京百度网讯科技有限公司 Audio data processing method, device, equipment and storage medium
JP2022139888A (en) * 2021-03-12 2022-09-26 パイオニア株式会社 Information processing device
CN115119110A (en) * 2022-07-29 2022-09-27 歌尔科技有限公司 Sound effect adjusting method, audio playing device and computer readable storage medium
WO2022214096A1 (en) * 2021-04-09 2022-10-13 广州视源电子科技股份有限公司 Method and apparatus for acquiring personalized sound effect parameters, and device and storage medium


Also Published As

Publication number Publication date
CN116312431B (en) 2023-11-24

Similar Documents

Publication Publication Date Title
US10955984B2 (en) Step sequencer for a virtual instrument
US9613605B2 (en) Method, device and system for automatically adjusting a duration of a song
CN110324718A (en) Audio-video generation method, device, electronic equipment and readable medium
EP2905773B1 (en) Method of playing music based on chords and electronic device implementing the same
CN115691544A (en) Training of virtual image mouth shape driving model and driving method, device and equipment thereof
US20190051272A1 (en) Audio editing and publication platform
CN104681048A (en) Multimedia read control device, curve acquiring device, electronic equipment and curve providing device and method
CN109410972A (en) Generate the method, apparatus and storage medium of sound effect parameters
CN116312431B (en) Electric tone key control method, apparatus, computer device, and storage medium
US9202447B2 (en) Persistent instrument
CN113821189B (en) Audio playing method, device, terminal equipment and storage medium
CN116312430B (en) Electric tone key control method, apparatus, computer device, and storage medium
JP2018507503A (en) Music providing method and music providing system
CN115186127A (en) Audio playing method and device based on sleep state and computer equipment
US20220068248A1 (en) Method and device for displaying music score in target music video
US20140282004A1 (en) System and Methods for Recording and Managing Audio Recordings
KR102058025B1 (en) Electronic device for extracting a highlight section of a sound source and method thereof
KR101554662B1 (en) Method for providing chord for digital audio data and an user terminal thereof
CN113781989A (en) Audio animation playing and rhythm stuck point identification method and related device
CN113766307A (en) Techniques for audio track analysis to support audio personalization
CN112685000A (en) Audio processing method and device, computer equipment and storage medium
US20200097249A1 (en) Computer apparatus for playback of audio
KR20180098027A (en) Electronic device and method for implementing music-related application
CN113438547B (en) Music generation method and device, electronic equipment and storage medium
US20170330544A1 (en) Method and system for creating an audio composition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant