CN113590076A - Audio processing method and device - Google Patents

Audio processing method and device

Info

Publication number
CN113590076A
Authority
CN
China
Prior art keywords: audio, mode, playing, interface, added
Prior art date
Legal status
Granted
Application number
CN202110782371.XA
Other languages
Chinese (zh)
Other versions
CN113590076B (en)
Inventor
朱一闻
谢劲松
阚方邑
龙一歌
Current Assignee
Hangzhou Netease Cloud Music Technology Co Ltd
Original Assignee
Hangzhou Netease Cloud Music Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Netease Cloud Music Technology Co Ltd
Priority to CN202110782371.XA
Publication of CN113590076A
Application granted
Publication of CN113590076B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16: Sound input; Sound output
    • G06F3/165: Management of the audio stream, e.g. setting of volume, audio stream path
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11B: INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B19/00: Driving, starting, stopping record carriers not specifically of filamentary or web form, or of supports therefor; Control thereof; Control of operating function; Driving both disc and head
    • G11B19/02: Control of operating function, e.g. switching from recording to reproducing
    • G11B19/16: Manual control

Abstract

The present disclosure relates to the field of audio processing technologies, and in particular to an audio processing method and apparatus. When it is detected that the gesture of a first terminal satisfies a preset condition, an audio playing interface of the first terminal is switched from a first mode to a disc playing mode; when the audio playing interface is in the disc playing mode, audio to be added is determined; and the audio to be added is fused with the source audio in the first mode. In this way, by detecting the gesture of the first terminal, the audio playing interface of the first terminal is switched from the first mode to the disc playing mode, so that disc playing can be performed while music is being played, which not only improves the efficiency of audio processing but also improves the user's experience.

Description

Audio processing method and device
Technical Field
The present disclosure relates to the field of audio processing technologies, and in particular, to an audio processing method and apparatus.
Background
A disc player is a device used by a DJ for live performance; it can join two or more different pieces of music together, enabling special control over the music and driving the atmosphere of the performance.
In the related art, disc playing can generally only be performed on a terminal through a standalone application. Because the related-art method requires importing music into the standalone application before disc playing can be performed, the currently played music cannot be processed during music playback, which reduces the efficiency of audio processing and degrades the user's experience.
Disclosure of Invention
The embodiment of the disclosure provides an audio processing method and an audio processing device, so as to improve the efficiency of audio processing and improve the experience of a user.
The specific technical scheme provided by the embodiment of the disclosure is as follows:
an audio processing method, comprising:
when the gesture of the first terminal is detected to meet a preset condition, switching an audio playing interface where the first terminal is located from a first mode to a disc playing mode;
when the audio playing interface is in a disc playing mode, determining audio to be added;
and fusing the audio to be added with the source audio in the first mode.
Optionally, when the gesture of the first terminal is detected to meet the preset condition, the audio playing interface where the first terminal is located is switched from the first mode to the disc playing mode, which specifically includes:
obtaining a deflection angle;
and if the deflection angle meets the preset condition, switching the audio playing interface from a first mode to a disc playing mode.
Optionally, if it is determined that the deflection angle satisfies the preset condition, switching the audio playing interface from the first mode to a disc playing mode, specifically including:
and if the deflection angle reaches the first angle threshold value, controlling a target object in the audio playing interface to move towards a preset direction until the target object is moved to a preset target position, and switching the audio playing interface from a first mode to a disc playing mode.
Optionally, the method further includes:
and in the process that the target object moves to the target position, hiding an operation control in the audio playing interface and/or increasing the image area of the target object.
Optionally, determining the audio to be added specifically includes:
taking the audio recommended by the system as the audio to be added; or,
and taking the audio selected from the audio playing interface as the audio to be added.
Optionally, the merging the audio to be added with the source audio in the first mode specifically includes:
determining the audio playing speed of the source audio and the audio playing speed of the audio to be added;
adjusting the audio playing speed of the source audio and the audio playing speed of the audio to be added;
and carrying out fusion processing on the adjusted source audio and the adjusted audio to be added.
Optionally, the merging the audio to be added with the source audio in the first mode specifically includes:
extracting the beat information of the source audio and the beat information of the audio to be added;
adjusting the beat information of the source audio and the beat information of the audio to be added;
and carrying out fusion processing on the adjusted source audio and the adjusted audio to be added.
Optionally, after the audio to be added and the source audio in the first mode are fused, the method further includes:
acquiring a percussion pad sampling audio;
and adding the pad sampling audio to the pre-processing audio obtained after the fusion processing.
Optionally, the obtaining of the pad sample audio specifically includes:
establishing a connection with a second terminal;
pad sample audio is acquired from the second terminal through the connection.
Optionally, after the audio to be added and the source audio in the first mode are fused, the method further includes:
sending the pre-processing audio after the fusion processing to other terminals;
receiving a target audio sent by the other terminal, wherein the target audio is obtained after the other terminal receives the preprocessed audio and processes the preprocessed audio in a preset processing mode;
and playing the target audio.
Optionally, the method further includes:
detecting a recording instruction, and carrying out interface recording on the audio playing interface to obtain an interface recording video;
capturing a front video through a first image acquisition device and/or capturing a rear video through a second image acquisition device;
and simultaneously displaying an interface recorded video, the front video and/or the rear video on the audio playing interface.
Optionally, the method further includes:
executing a corresponding playing operation on the interface recorded video, the front video and/or the rear video, wherein the playing operation at least comprises any one of the following: switching between pictures, dragging, zooming in, and stacking.
Optionally, the method further includes:
and displaying a cover image corresponding to the target audio obtained after the fusion processing on the audio playing interface.
Optionally, the method further includes:
and displaying the special effect video corresponding to the target audio obtained after the fusion processing on the audio playing interface.
Optionally, after the audio playing interface where the first terminal is located is switched from the first mode to the disc playing mode, the method further includes:
and if the dish beating processing instruction is not detected in a preset time period and/or the posture of the first terminal is detected to meet a preset posture condition, switching from the dish beating mode to the first mode.
Optionally, after the audio to be added and the source audio in the first mode are fused, the method further includes:
and when an interface conversion instruction is detected, controlling a target object in the audio playing interface to return to an original position, wherein the original position represents the position information of the target object when the first terminal is in the first mode.
An audio processing apparatus comprising:
the first switching module is used for switching the audio playing interface where the first terminal is located from a first mode to a disc playing mode when detecting that the posture of the first terminal meets a preset condition;
the determining module is used for determining audio to be added when the audio playing interface is in a disc playing mode;
and the first processing module is used for fusing the audio to be added with the source audio in the first mode.
Optionally, the first switching module is specifically configured to:
obtaining a deflection angle;
and if the deflection angle meets the preset condition, switching the audio playing interface from a first mode to a disc playing mode.
Optionally, if it is determined that the deflection angle satisfies the preset condition, when the audio playing interface is switched from the first mode to the disc playing mode, the first switching module is specifically configured to:
and if the deflection angle reaches the first angle threshold value, controlling a target object in the audio playing interface to move towards a preset direction until the target object is moved to a preset target position, and switching the audio playing interface from a first mode to a disc playing mode.
Optionally, the first switching module is further configured to:
and in the process that the target object moves to the target position, hiding an operation control in the audio playing interface and/or increasing the image area of the target object.
Optionally, when determining the audio to be added, the determining module is specifically configured to:
taking the audio recommended by the system as the audio to be added; or,
and taking the audio selected from the audio playing interface as the audio to be added.
Optionally, the first processing module is specifically configured to:
determining the audio playing speed of the source audio and the audio playing speed of the audio to be added;
adjusting the audio playing speed of the source audio and the audio playing speed of the audio to be added;
and carrying out fusion processing on the adjusted source audio and the adjusted audio to be added.
Optionally, the first processing module is specifically configured to:
extracting the beat information of the source audio and the beat information of the audio to be added;
adjusting the beat information of the source audio and the beat information of the audio to be added;
and carrying out fusion processing on the adjusted source audio and the adjusted audio to be added.
Optionally, after the audio to be added and the source audio in the first mode are fused, the apparatus further includes:
the acquisition module is used for acquiring the sampling audio of the pad;
and the adding module is used for adding the pad sampling audio to the pre-processed audio obtained after the fusion processing.
Optionally, the obtaining module is specifically configured to:
establishing a connection with a second terminal;
pad sample audio is acquired from the second terminal through the connection.
Optionally, after the audio to be added and the source audio in the first mode are fused, the apparatus further includes:
the sending module is used for sending the preprocessed audio subjected to the fusion processing to other terminals;
the receiving module is used for receiving a target audio sent by the other terminal, wherein the target audio is obtained by processing the preprocessed audio in a preset processing mode after the other terminal receives the preprocessed audio;
and the audio playing module is used for playing the target audio.
Optionally, the apparatus further comprises:
the detection module is used for detecting a recording instruction and carrying out interface recording on the audio playing interface to obtain an interface recording video;
the acquisition module is used for capturing a front video through a first image acquisition device and/or capturing a rear video through a second image acquisition device;
and the first display module is used for simultaneously displaying interface recorded videos, the front videos and/or the rear videos on the audio playing interface.
Optionally, the apparatus further comprises:
a second processing module, configured to perform a corresponding play operation on the interface recorded video, the front video, and/or the rear video, where the play operation at least includes any one of: switching between pictures, dragging, zooming in, and stacking.
Optionally, the apparatus further comprises:
and the second display module is used for displaying the cover image corresponding to the target audio obtained after the fusion processing on the audio playing interface.
Optionally, the apparatus further comprises:
and the third display module is used for displaying the special effect video corresponding to the target audio obtained after the fusion processing on the audio playing interface.
Optionally, after the audio playing interface where the first terminal is located is switched from the first mode to the disc playing mode, the apparatus further includes:
the second switching module is used for switching from the disc playing mode to the first mode if it is determined that a disc playing processing instruction is not detected within a preset time period and/or the posture of the first terminal meets a preset posture condition.
Optionally, after the audio to be added and the source audio in the first mode are fused, the apparatus further includes:
and the control module is used for controlling a target object in the audio playing interface to return to an original position when an interface conversion instruction is detected, wherein the original position represents the position information of the target object when the first terminal is in the first mode.
An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the audio processing method when executing the program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned audio processing method.
In the embodiments of the present disclosure, when it is detected that the gesture of the first terminal meets the preset condition, the audio playing interface of the first terminal is switched from the first mode to the disc playing mode; when the audio playing interface is in the disc playing mode, the audio to be added is determined, and the audio to be added and the source audio in the first mode are fused. In this way, when the gesture of the first terminal is detected to meet the preset condition, the mode of the audio playing interface is switched so that audio fusion can be performed, and the user can play discs while music is being played without needing a standalone disc playing APP, thereby improving the user's experience. In addition, the source audio is the audio currently played by the first terminal, and the audio to be added is obtained by automatic matching against the source audio, so no audio needs to be imported manually and the efficiency of audio processing can be improved.
Drawings
FIG. 1 is a flow chart of an audio processing method in an embodiment of the present disclosure;
FIG. 2a is a schematic interface diagram of a first mode in an embodiment of the present disclosure;
FIG. 2b is a schematic view of an interface in a rotated state according to an embodiment of the disclosure;
FIG. 2c is a schematic interface diagram illustrating a disc-playing mode according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an interface for recommending audio in an embodiment of the present disclosure;
fig. 4 is a schematic interface diagram of beat information synchronization in the embodiment of the present disclosure;
FIG. 5a is a schematic view of an interface showing a pad according to an embodiment of the present disclosure;
FIG. 5b is a schematic diagram of an interface for sound effect selection according to an embodiment of the present disclosure;
FIG. 6a is a schematic diagram of an interface for starting recording of a pad sample audio according to an embodiment of the disclosure;
fig. 6b is a schematic interface diagram in recording in an embodiment of the present disclosure;
fig. 6c is a schematic view of an interface for completing recording in the embodiment of the present disclosure;
FIG. 6d is a schematic diagram of an interface for recording next pad sample audio according to an embodiment of the disclosure;
FIG. 7 is a schematic interface diagram of a second terminal in an embodiment of the disclosure;
FIG. 8 is a schematic diagram of an interface for sound effect processing according to an embodiment of the present disclosure;
FIG. 9a is a schematic interface diagram of an audio playback interface in an embodiment of the disclosure;
FIG. 9b is a schematic diagram of an interface for saving published videos in an embodiment of the present disclosure;
FIG. 10a is a schematic diagram of an interface for turning on a camera in an embodiment of the present disclosure;
fig. 10b is a schematic view of an interface where a terminal camera is currently turned on in the embodiment of the present disclosure;
FIG. 10c is a schematic view of an interface for video playing of other terminals in the embodiment of the present disclosure;
FIG. 10d is a schematic diagram of an interface for minimizing a frame in an embodiment of the present disclosure;
FIG. 10e is a schematic diagram of an interface for stacking multiple frames according to an embodiment of the disclosure;
FIG. 11 is another flowchart of an audio processing method according to an embodiment of the disclosure;
FIG. 12 is a schematic structural diagram of an audio processing apparatus according to an embodiment of the disclosure;
fig. 13 is a schematic structural diagram of an electronic device in an embodiment of the disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only some embodiments of the present disclosure, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
A disc player is a device used by a DJ for live performance; it can join two or more different pieces of music together, enabling special control over the music and driving the atmosphere of the performance.
With the rapid development of terminals such as tablet computers and of their player software, simple disc-playing-related functions can be implemented through player software. In the related art, corresponding disc playing operations can therefore be performed through standalone applications. However, because the related-art method requires importing music into a standalone application for disc playing, the currently played music cannot be processed during music playback, which reduces the efficiency of audio processing; moreover, the user must enter the disc playing mode manually and cannot switch from a music playing state to the disc playing mode, which degrades the user's experience.
In the embodiments of the present disclosure, when it is detected that the gesture of the first terminal meets the preset condition, the audio playing interface of the first terminal is switched from the first mode to the disc playing mode; when the audio playing interface is in the disc playing mode, the audio to be added is determined, and the audio to be added is fused with the source audio in the first mode. In this way, the disc playing mode is entered as soon as the gesture of the first terminal is detected to meet the preset condition, so the operation is simple and quick and the efficiency of audio processing is greatly improved; moreover, the user does not need to enter the disc playing mode manually and can perform disc playing while music is being played, thereby improving the user's experience.
Based on the above embodiment, referring to fig. 1, a flowchart of an audio processing method in an embodiment of the present disclosure is shown, which specifically includes:
step 100: and when the gesture of the first terminal is detected to meet the preset condition, switching the audio playing interface where the first terminal is located from the first mode to a disc playing mode.
In the embodiment of the disclosure, the first terminal detects the current posture in real time, and when the posture of the first terminal is detected to meet the preset condition, the audio playing interface displayed at the moment by the first terminal is switched from the first mode to the disc playing mode, so that the first terminal enters the disc playing mode and displays the audio playing interface in the disc playing mode, and a user can play a disc through the audio playing interface in the disc playing mode.
The embodiments of the present disclosure provide a possible implementation manner for detecting the gesture of the first terminal: the gesture of the first terminal can be determined from the deflection angle of the first terminal, and whether the audio playing interface needs to be switched from the first mode to the disc playing mode is then judged according to the deflection angle. The step of switching the audio playing interface from the first mode to the disc playing mode in the embodiments of the present disclosure is described in detail below; when step 100 is executed, the method specifically includes:
s1001: and acquiring a deflection angle.
In the embodiment of the present disclosure, the angular motion detection module disposed in the first terminal detects the deflection angle of the first terminal, so as to obtain the deflection angle of the first terminal.
The angular motion detection module is configured to detect a current deflection angle of the first terminal, and the angular motion detection module may be, for example, a physical gyroscope, which is not limited in this embodiment of the present disclosure.
The deflection angle represents an included angle between a vertical center line before the audio playing interface deflects and a side edge of the current audio playing interface.
It should be noted that, when the first terminal is in the vertical screen state, the audio playing interface is in the first mode, and the deflection angle of the first terminal is 0 °. Referring to fig. 2a, which is an interface schematic diagram of a first mode in an embodiment of the present disclosure, when the first terminal is in a vertical screen state, a deflection angle obtained through the physical gyroscope is 0 °, and an audio playing interface displayed on the first terminal is in the first mode, in which a user can play a song through the audio playing interface.
In addition, it should be noted that, when the user rotates the first terminal from the deflection angle of 0 °, the angular motion detection module disposed in the first terminal obtains the deflection angle in real time, and determines whether the obtained deflection angle satisfies a preset condition.
S1002: and if the deflection angle meets the preset condition, switching the audio playing interface from the first mode to a disc playing mode.
In the embodiment of the disclosure, the deflection angle of the first terminal is obtained in real time, whether the deflection angle of the first terminal meets the preset condition is judged, and if the obtained deflection angle meets the preset condition, the audio playing interface is switched from the first mode to the disc playing mode.
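As a non-limiting sketch of steps S1001-S1002, the following Python snippet shows how a deflection angle reported by the angular motion detection module could be polled and compared against the preset condition. The sensor-reading helper read_deflection_angle(), the mode names and the concrete threshold are illustrative assumptions, not part of the disclosure.

```python
FIRST_MODE = "first_mode"
DISC_PLAYING_MODE = "disc_playing_mode"

SECOND_ANGLE_THRESHOLD = 90.0  # degrees; reaching it enters the disc playing mode


def read_deflection_angle() -> float:
    """Hypothetical wrapper around the terminal's angular motion detection module
    (e.g. a physical gyroscope). Returns the angle between the vertical centre
    line before deflection and the side edge of the current interface, in degrees."""
    raise NotImplementedError("platform-specific sensor access")


def next_mode(current_mode: str) -> str:
    """Called in a real-time polling loop: switch the audio playing interface
    from the first mode to the disc playing mode once the deflection angle
    meets the preset condition (here, reaching the second angle threshold)."""
    angle = read_deflection_angle()
    if current_mode == FIRST_MODE and angle >= SECOND_ANGLE_THRESHOLD:
        return DISC_PLAYING_MODE
    return current_mode
```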
In order to provide better experience for users, various different preset conditions can be set according to the size of the deflection angle so as to trigger the audio playing interface to display different contents. Three different preset conditions set in the embodiments of the present disclosure are explained in detail below.
First preset condition: the deflection angle is 45 °.
In this case, the user rotates the first terminal, so that the angular motion detection module obtains the deflection angle of the first terminal in real time, and when the deflection angle reaches 45 °, the target object of the audio playing interface is controlled to execute corresponding operation.
When the deflection angle reaches 45°, the corresponding operation may be controlling the target object to lift.
For example, referring to fig. 2b, which is a schematic diagram of an interface in a rotating state according to an embodiment of the present disclosure, assuming that the target object is the vinyl stylus in the audio playing interface, when it is determined that the deflection angle reaches 45°, the vinyl stylus in the audio playing interface is controlled to lift, and at this point there is no contact between the vinyl stylus displayed in the audio playing interface and the vinyl record.
It should be noted that the deflection angle required for controlling the lift of the vinyl stylus is not limited to 45°; for example, it may also be 30°, which is not limited in the embodiments of the present disclosure.
In addition, it should be noted that the vinyl stylus may be controlled to lift gradually as the deflection angle increases, or, of course, may be controlled to lift instantly when the deflection angle reaches 45°, which is not limited in the embodiments of the present disclosure.
Second preset condition: the deflection angle reaches a first angle threshold and is less than a second angle threshold.
When step S1002 is executed, the method specifically includes:
and if the deflection angle reaches the first angle threshold value, controlling the target object in the audio playing interface to move towards the preset direction until the target object is moved to the preset target position, and switching the audio playing interface from the first mode to the disc playing mode.
In the embodiment of the disclosure, if the deflection angle of the first terminal reaches the first angle threshold but is smaller than the second angle threshold, the target object in the audio playing interface is controlled to move in the preset direction along with the increase of the deflection angle until the audio playing interface is switched from the first mode to the disc playing mode after the target object is moved to the preset target position.
The original position of the target object before moving may be, for example, a center position of the audio playing interface in the first mode, the preset direction may be, for example, a left movement, and the target position may be, for example, a left disc position of the audio playing interface in the disc playing mode.
For example, suppose the first angle threshold is 45°, the second angle threshold is 90°, and the target object is the vinyl record displayed in the audio playing interface. When the audio playing interface is in the first mode, the vinyl record is located at its original position. When the user rotates the first terminal, the deflection angle of the first terminal is obtained in real time; once the obtained deflection angle exceeds 45°, the vinyl record is controlled to move leftward as the deflection angle increases until it reaches the preset left disc position, at which point the audio playing interface is switched from the first mode to the disc playing mode.
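The gradual movement described in the example above can be viewed as interpolating the target object's on-screen position from the deflection angle. A minimal sketch, assuming a linear mapping and illustrative normalised coordinates (the actual animation curve and positions are not specified here):

```python
FIRST_ANGLE_THRESHOLD = 45.0   # degrees
SECOND_ANGLE_THRESHOLD = 90.0  # degrees

ORIGINAL_X = 0.5   # centre of the interface in the first mode (normalised, assumed)
TARGET_X = 0.25    # left disc position in the disc playing mode (normalised, assumed)


def target_object_x(deflection_angle: float) -> float:
    """Map the deflection angle to the horizontal position of the target object
    (e.g. the vinyl record): stationary below the first threshold, moving left
    as the angle grows, resting at the target position at the second threshold."""
    if deflection_angle <= FIRST_ANGLE_THRESHOLD:
        return ORIGINAL_X
    if deflection_angle >= SECOND_ANGLE_THRESHOLD:
        return TARGET_X
    progress = (deflection_angle - FIRST_ANGLE_THRESHOLD) / (
        SECOND_ANGLE_THRESHOLD - FIRST_ANGLE_THRESHOLD
    )
    return ORIGINAL_X + progress * (TARGET_X - ORIGINAL_X)
```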
Further, since both operation controls and the target object are displayed in the audio playing interface, during the movement of the target object to the preset target position, only the operation controls may be controlled, only the target object may be controlled, or both may be controlled at the same time. The corresponding operations performed by the audio playing interface while the target object moves to the target position in the embodiments of the present disclosure are described in detail below, and specifically include:
and in the process of moving the target object to the target position, hiding an operation control in the audio playing interface and/or increasing the image area of the target object.
In the embodiment of the disclosure, because the operation control and the target object are displayed in the audio playing interface, in the process that the target object moves to the target position, the operation control and the target object in the audio playing interface can be controlled to execute different operations. The following describes the steps of controlling the audio playing interface in the embodiment of the present disclosure in three cases.
In the first case: only the operation control is controlled.
The method specifically comprises the following steps: and in the process of moving the target object to the target position, keeping the image area of the target object unchanged, and hiding the operation control in the audio playing interface, so that the operation control in the audio playing interface is in a hidden state.
The operation control may be, for example, start, pause, previous, next, and the like, which is not limited in the embodiment of the present disclosure.
For example, the deflection angle of the first terminal is acquired in real time, when the acquired deflection angle exceeds 45 °, the vinyl record is controlled to move leftward along with the increase of the deflection angle, and in the process of controlling the movement of the vinyl record, the "start" operation control, the "pause" operation control, the "previous" operation control and the "next" operation control in the audio playing interface are hidden.
In this case, although the target object is moved, the image area of the target object is always kept constant.
In the second case: only the image area of the target object is controlled.
The method specifically comprises the following steps: and in the process of moving the target object to the target position, keeping the operation control in the audio playing interface in a display state, and increasing the image area of the target object along with the increase of the deflection angle.
For example, the deflection angle of the first terminal is acquired in real time. When the acquired deflection angle exceeds 45°, the vinyl record is controlled to move leftward as the deflection angle increases; while the vinyl record is being moved, the "start", "pause", "previous" and "next" operation controls in the audio playing interface remain displayed, but the image area of the vinyl record increases with the deflection angle until it reaches a preset size, after which it stops growing.
In this case, the image area of the target object may also be reduced with the increase of the deflection angle in the process of moving the target object to the target position, which is not limited in the embodiment of the present disclosure.
In the third case: and simultaneously controlling the image areas of the operation control and the target object.
The method specifically comprises the following steps: and hiding the operation control in the audio playing interface and increasing the image area of the target object in the process of moving the target object to the target position.
In the embodiment of the disclosure, in the process that the target object moves to the target position, the operation control in the audio playing interface is hidden, so that the operation control in the audio playing interface is in a hidden state, and the image area of the target object is increased along with the increase of the deflection angle.
For example, the deflection angle of the first terminal is acquired in real time. When the acquired deflection angle exceeds 45°, the vinyl record is controlled to move leftward as the deflection angle increases; while the vinyl record is being moved, the "start", "pause", "previous" and "next" operation controls in the audio playing interface are hidden, and the image area of the vinyl record increases with the deflection angle until it reaches a preset size, after which it stops growing.
The third preset condition: the deflection angle reaches a second angle threshold.
In the embodiment of the disclosure, when the deflection angle reaches the second angle threshold, the first terminal is in a horizontal screen state and enters a disc playing mode.
For example, suppose the second angle threshold is 90°. Referring to fig. 2c, which is an interface diagram of the disc playing mode in an embodiment of the present disclosure, the deflection angle of the first terminal is 90° at this time, so it is determined that the deflection angle reaches the second angle threshold, the first terminal is in the horizontal screen state, and the disc playing mode is entered.
Further, in the embodiment of the present disclosure, after the audio playing interface is switched from the first mode to the disc playing mode, if the user does not trigger the generation of the disc playing processing instruction in the audio playing interface within the preset time, the audio playing interface may be switched back to the first mode from the disc playing mode, and of course, the audio playing interface may also be switched back to the first mode from the disc playing mode when it is detected that the posture of the first terminal meets the preset posture condition. The following describes in detail the step of switching from the disc playing mode to the first mode in the embodiment of the present disclosure, which specifically includes:
and if the dish beating processing instruction is not detected in the preset time period and/or the posture of the first terminal is detected to meet the preset posture condition, switching from the dish beating mode to the first mode.
In the embodiment of the present disclosure, when the disc playing mode is switched back to the first mode, the following three situations can be specifically distinguished:
the first mode is as follows: no dishing processing instruction is detected.
The method specifically comprises the following steps: and if the dish beating processing instruction is not detected within the preset time period, switching from the dish beating mode to the first mode.
In the embodiment of the disclosure, firstly, a time period is preset, then, after the audio playing interface is switched from the disc playing mode to the first mode, timing is started, whether a disc playing processing instruction is obtained is detected in real time, if the disc playing processing instruction is detected within the preset time period, corresponding disc playing processing is performed according to the disc playing processing instruction, if the disc playing processing instruction is not detected within the preset time period, it is determined that a user does not execute disc playing operation, and therefore, the audio playing interface is switched from the disc playing mode to the first mode.
The second mode is as follows: the posture of the first terminal meets a preset posture condition.
The method specifically comprises the following steps: and when the posture of the first terminal is detected to meet the preset posture condition, switching from the disc playing mode to the first mode.
In the embodiment of the disclosure, whether the posture of the first terminal meets the preset posture condition is detected in real time, and if the posture of the first terminal meets the preset posture condition, the disc playing mode is switched to the first mode.
The preset posture condition may be, for example, a third angle threshold, which is not limited in the embodiment of the present disclosure.
For example, suppose the preset posture condition is a third angle threshold. When the first terminal is in the disc playing mode, the obtained deflection angle is 90°; the deflection angle of the first terminal is then detected in real time, and if it is determined that the deflection angle has returned to 0°, the disc playing mode is switched to the first mode.
Further, in the embodiment of the present disclosure, after the command to stop playing the disc is detected, the current disc playing mode may be switched to the first mode, which is not limited in the embodiment of the present disclosure.
The third mode is as follows: no disc playing processing instruction is detected, and the posture of the first terminal meets the preset posture condition.
The method specifically comprises the following steps: and if it is determined that the disc playing processing instruction is not detected within the preset time period and the posture of the first terminal is detected to meet the preset posture condition, switching from the disc playing mode to the first mode.
In the embodiment of the disclosure, after the audio playing interface is switched from the first mode to the disc playing mode, timing is started, whether a disc playing processing instruction is obtained is detected in real time, if it is determined that the disc playing processing instruction is not detected within a preset time period, it is determined that a user does not execute a disc playing operation, and meanwhile, when it is detected that the deflection angle of the first terminal meets a third angle threshold, the audio playing interface is switched from the disc playing mode to the first mode.
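The three ways of switching back from the disc playing mode to the first mode can be combined into one check. A sketch under the assumption that the preset time period and the third angle threshold take the illustrative values below:

```python
import time
from typing import Optional

IDLE_TIMEOUT_S = 30.0          # preset time period (illustrative value)
THIRD_ANGLE_THRESHOLD = 0.0    # degrees; back to the original posture


def should_return_to_first_mode(last_instruction_time: float,
                                deflection_angle: float,
                                now: Optional[float] = None) -> bool:
    """Return True when the disc playing mode should switch back to the first
    mode: no disc playing processing instruction within the preset time period
    and/or the terminal's posture meets the preset posture condition."""
    now = time.monotonic() if now is None else now
    idle = (now - last_instruction_time) >= IDLE_TIMEOUT_S
    posture_met = deflection_angle <= THIRD_ANGLE_THRESHOLD
    return idle or posture_met
```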
Step 110: and when the audio playing interface is in the disc playing mode, determining audio to be added.
In the embodiment of the disclosure, when it is determined that the audio playing interface is in the disc playing mode, the user may start disc playing processing, and thus, determine the audio to be added corresponding to the source audio currently being played by the first terminal.
In order to enable the user to have a better experience, when determining the audio to be added, the audio may be recommended by the system, and of course, the user may also select the audio by himself, and two possible implementations of determining the audio to be added in the embodiment of the present disclosure are described in detail below.
The first mode is as follows: and (5) system recommendation.
When determining the audio to be added, the method specifically includes:
and taking the audio recommended by the system as the audio to be added.
In the embodiment of the disclosure, the system of the first terminal may automatically identify the audio currently being played, and obtain the audio recommended by the system through matching according to the audio currently being played, so that the audio recommended by the system may be used as the audio to be added.
For example, as shown in fig. 2c, the user may trigger the system to recommend audio by clicking the "select a song to try" operation control in fig. 2c. Referring to fig. 3, which is an interface diagram of recommended audio in an embodiment of the present disclosure, after system recommendation is triggered, the audio recommended by the system is displayed in the audio playing interface in the form of a list, so that the first terminal can obtain the audio to be added when the user clicks the corresponding audio; for example, if the user selects the "song name" operation control, the first terminal takes the selected song as the audio to be added.
It should be noted that, when acquiring the audio recommended by the system, the recommended audio may be determined according to the acquired preference information of the user; or the number of beats of the currently playing source audio may be determined, and audio with the same number of beats as the source audio obtained by matching and used as the recommended audio; or high-quality audio may be stored in the system in advance and used as the recommended audio; and of course, audio recommended according to the genre information of the source audio may also be obtained and used as the audio recommended by the system.
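One of the matching strategies above, recommending candidates whose number of beats matches that of the source audio, could be sketched with the open-source librosa library as follows; the candidate catalogue and the BPM tolerance are assumptions for illustration and do not reflect the product's actual recommendation logic:

```python
import librosa


def estimate_bpm(path: str) -> float:
    """Estimate the tempo (beats per minute) of an audio file."""
    y, sr = librosa.load(path, sr=None, mono=True)
    tempo, _beats = librosa.beat.beat_track(y=y, sr=sr)
    return float(tempo)


def recommend_by_bpm(source_path: str, candidate_paths: list[str],
                     tolerance_bpm: float = 1.0) -> list[str]:
    """Return the candidate tracks whose estimated BPM matches the source
    audio's BPM within a small tolerance; these could be presented as the
    system-recommended audio to be added."""
    source_bpm = estimate_bpm(source_path)
    return [p for p in candidate_paths
            if abs(estimate_bpm(p) - source_bpm) <= tolerance_bpm]
```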
The second mode is as follows: audio selected from an audio playback interface.
The method specifically comprises the following steps: and taking the audio selected from the audio playing interface as the audio to be added.
In the embodiment of the disclosure, when the first terminal enters the disc playing mode, the audio playing interface displays the audio list determined according to the user information at this time, and the user can select the corresponding audio from the audio list, so that the first terminal takes the audio selected from the audio playing interface as the audio to be added.
The audio list may be, for example, "music i like," and may also be, for example, "recently played," which is not limited in the embodiment of the present disclosure.
Step 120: and merging the audio to be added with the source audio in the first mode.
In the embodiment of the disclosure, the audio to be added and the source audio in the first mode are subjected to fusion processing, so that the target audio is obtained, and finally, the obtained target audio is played.
Specifically, when the audio to be added and the source audio are subjected to fusion processing, the audio to be added and the source audio obtained after the audio playing rate is adjusted may be subjected to fusion processing, and the audio to be added and the source audio obtained after the audio beat number is adjusted may also be subjected to fusion processing.
The first mode is as follows: and adjusting the audio playing speed.
Then, when the step 120 is executed, the method specifically includes:
s1201: the audio playing speed of the source audio and the audio playing speed of the audio to be added are determined.
In the embodiments of the present disclosure, after the user selects the corresponding audio to be added in the audio playing interface, the operation object corresponding to the audio to be added is displayed at the right disc position of the audio playing interface, and the audio playing interface switches to the fusion processing interface. By clicking the "sync" operation control in the switched interface, the audio playing rate of the source audio is identified and determined, and the audio playing rate of the audio to be added is likewise identified and determined.
S1202: and adjusting the audio playing speed of the source audio and the audio playing speed of the audio to be added.
In the embodiment of the disclosure, after the audio playing rate of the source audio and the audio playing rate of the audio to be added are determined, the audio playing rate of the source audio and the audio playing rate of the audio to be added are adjusted, so that the audio playing rate of the source audio is the same as the audio playing rate of the audio to be added.
When the audio playing rate of the source audio and the audio playing rate of the audio to be added are adjusted, the audio playing rate of the source audio can be adjusted, and the audio playing rate of the source audio is adjusted to the audio playing rate of the audio to be added, so that the audio playing rate of the source audio is equal to the audio playing rate of the audio to be added. The audio playing speed of the audio to be added can be adjusted to the audio playing speed of the source audio, so that the audio playing speed of the audio to be added is equal to the audio playing speed of the source audio.
S1203: and carrying out fusion processing on the adjusted source audio and the adjusted audio to be added.
In the embodiment of the disclosure, the adjusted source audio and the adjusted audio to be added are subjected to fusion processing, so as to obtain the target audio.
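A minimal sketch of this first fusion approach (equalising the audio playing rates and then mixing), assuming both tracks are available as audio files and using librosa for time-stretching; stretching only the audio to be added and mixing with equal weights are illustrative choices, since the description above allows adjusting either track:

```python
import numpy as np
import librosa


def fuse_by_rate(source_path: str, added_path: str,
                 rate_factor: float) -> tuple[np.ndarray, int]:
    """Stretch the audio to be added by rate_factor so its playing rate matches
    the source audio, then mix the two into the pre-processed audio.
    rate_factor > 1 speeds the added audio up, < 1 slows it down; how the
    factor is derived from the two playing rates is an assumption here."""
    src, sr = librosa.load(source_path, sr=44100, mono=True)
    add, _ = librosa.load(added_path, sr=44100, mono=True)

    add = librosa.effects.time_stretch(add, rate=rate_factor)

    # Pad the shorter signal and mix with equal weights.
    n = max(len(src), len(add))
    src = np.pad(src, (0, n - len(src)))
    add = np.pad(add, (0, n - len(add)))
    return 0.5 * src + 0.5 * add, sr
```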
The second mode is as follows: and adjusting the beat information.
S1301: beat information of source audio and beat information of audio to be added are extracted.
In the embodiment of the present disclosure, the beat information of the source audio is extracted to determine the beat information of the source audio, and meanwhile, the beat information of the audio to be added is extracted to determine the beat information of the audio to be added.
S1302: and adjusting the beat information of the source audio and the beat information of the audio to be added.
In the embodiment of the disclosure, after determining the beat information of the source audio and the beat information of the audio to be added, the beat information of the source audio and the beat information of the audio to be added are adjusted, so that the beat information of the source audio is the same as the beat information of the audio to be added.
When the beat information of the source audio and the beat information of the audio to be added are adjusted, the beat information of the source audio can be adjusted, and the beat information of the source audio is adjusted to the beat information of the audio to be added, so that the beat information of the source audio is equal to the beat information of the audio to be added. And adjusting the beat information of the audio to be added to the beat information of the source audio so that the beat information of the audio to be added is equal to the beat information of the source audio.
For example, referring to fig. 4, which is an interface diagram of beat information synchronization in an embodiment of the present disclosure: after the user selects the corresponding audio to be added in the audio playing interface, the selected audio to be added is displayed at the right disc position, and additional operation controls for processing it are displayed; a "sync" operation control is shown below the left disc position and another below the right disc position. When the user clicks the "sync" control below the left disc position, the beat information and BPM of the audio to be added are synchronized to those of the source audio with one key; when the user clicks the "sync" control below the right disc position, the beat information and BPM of the source audio are synchronized to those of the audio to be added with one key, so that the source audio and the audio to be added end up with the same beat information and BPM.
S1303: and carrying out fusion processing on the adjusted source audio and the adjusted audio to be added.
In the embodiment of the disclosure, the adjusted source audio and the adjusted audio to be added are subjected to fusion processing, so as to obtain the target audio.
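The one-key beat/BPM synchronisation of the second fusion approach can be sketched in the same spirit: estimate both tempos, stretch one track so its BPM matches the other's, then mix. The beat-tracking algorithm and the equal-weight mix are assumptions; the disclosure does not prescribe them:

```python
import numpy as np
import librosa


def sync_and_fuse(source_path: str, added_path: str,
                  sync_added_to_source: bool = True):
    """One-key BPM sync followed by mixing into the pre-processed audio."""
    src, sr = librosa.load(source_path, sr=44100, mono=True)
    add, _ = librosa.load(added_path, sr=44100, mono=True)

    src_bpm, _ = librosa.beat.beat_track(y=src, sr=sr)
    add_bpm, _ = librosa.beat.beat_track(y=add, sr=sr)

    if sync_added_to_source:
        # "sync" below the left disc: bring the audio to be added to the source's BPM.
        add = librosa.effects.time_stretch(add, rate=float(src_bpm) / float(add_bpm))
    else:
        # "sync" below the right disc: bring the source audio to the added audio's BPM.
        src = librosa.effects.time_stretch(src, rate=float(add_bpm) / float(src_bpm))

    n = max(len(src), len(add))
    src = np.pad(src, (0, n - len(src)))
    add = np.pad(add, (0, n - len(add)))
    return 0.5 * src + 0.5 * add, sr
```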
Further, in order to enrich the functions available to the user during disc playing, the user can also add pad sample audio to the pre-processed audio obtained after the fusion processing. The steps of adding pad sample audio in the embodiments of the present disclosure are therefore explained in detail below, and specifically include:
s1401: a pad sample audio is acquired.
In the embodiment of the disclosure, firstly, the pad is started through the pad button displayed on the audio playing interface, and the pad sampling audio is obtained through the displayed pad.
The pad sample audio may be selected by the user, or may be matched in the background, or may be recorded by the user, and the pad sample audio recorded by the user may be, for example, 10 seconds.
It should be noted that after the pad is opened, pad sample audio matching the genre information of the source audio can be recommended according to that genre information and displayed in the audio playing interface in the form of pad selection controls, so that the user can select the corresponding pad sample audio by clicking different pad selection controls and the first terminal can acquire the pad sample audio selected by the user.
For example, referring to fig. 5a, which is an interface diagram of displaying the pad in an embodiment of the present disclosure, the user opens the pad by clicking the "pad button" operation control displayed on the audio playing interface; pad sample audio matching the genre of the source audio is then recommended and displayed in the audio playing interface in the form of pad selection controls. The eight squares in fig. 5a are pad selection controls, each corresponding to one pad sample audio.
Further, before the pad selection controls are displayed, the pad sample audio set needs to be selected first. For example, as shown in fig. 5b, which is an interface diagram of sound effect selection in an embodiment of the present disclosure, the user selects a corresponding sound effect through the sound effect options displayed in the audio playing interface, so that the corresponding pad sample audio set is obtained according to the selected sound effect; the sound effect options include sound effects, voice, synthesizer, custom, and the like.
For example, when the sound effect selected by the user is "custom", the user can trigger the first terminal to start recording a custom pad sample audio by clicking the "custom" option displayed in the audio playing interface. Referring to fig. 6a, which is an interface diagram of starting the recording of a pad sample audio in an embodiment of the present disclosure, the start time of the recording is displayed in the audio playing interface and the pad sample audio is captured by a microphone. Referring to fig. 6b, an interface diagram during recording, the frequency information of the captured pad sample audio and the elapsed recording time are displayed in the audio playing interface; the recording time here is 3.34 seconds. Referring to fig. 6c, an interface diagram of completed recording, the time at which the recording of the pad sample audio was completed is displayed, here 8.34 seconds. Referring to fig. 6d, an interface diagram of recording the next pad sample audio, "sound effect 2" is displayed in the audio playing interface and the recording time is 0.00 seconds.
S1402: the pad sample audio is added to the preprocessed audio obtained after the fusion process.
In the embodiment of the present disclosure, the preprocessed audio is obtained after the fusion processing, and the obtained pad sample audio is added to the preprocessed audio.
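Adding a pad sample audio to the pre-processed audio amounts to overlaying a short clip at the trigger offset. A sketch assuming both signals are mono float arrays at the same sample rate; the offset would come from the moment the user taps the pad:

```python
import numpy as np


def add_pad_sample(pre_audio: np.ndarray, pad_sample: np.ndarray,
                   offset_samples: int, gain: float = 1.0) -> np.ndarray:
    """Overlay the pad sample audio onto the pre-processed audio starting at
    offset_samples (the moment the pad was tapped), discarding whatever would
    fall past the end of the pre-processed audio."""
    out = pre_audio.astype(np.float64, copy=True)
    end = min(len(out), offset_samples + len(pad_sample))
    if offset_samples < len(out):
        out[offset_samples:end] += gain * pad_sample[: end - offset_samples]
    return out
```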
Furthermore, in the embodiments of the present disclosure, the touch manner of the user may also be obtained, and the number of times the pad sample audio is played may be controlled according to the touch manner.
For example, when the user's touch is a single tap, the pad sample audio is played once; when the user's touch is a long press, the pad sample audio is played multiple times.
When the user's touch is a long press, the number of times the pad sample audio is played may be set randomly; alternatively, a popup window may be triggered to let the user select the number of plays.
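The touch-manner logic above (a single tap plays the pad sample once, a long press plays it several times or pops up a count selector) reduces to a small mapping; the tap/long-press boundary and the random repeat range are assumptions for illustration:

```python
import random

LONG_PRESS_THRESHOLD_S = 0.5   # illustrative boundary between a tap and a long press


def play_count_for_touch(press_duration_s: float, ask_user: bool = False) -> int:
    """Decide how many times the pad sample audio is played for a touch event:
    a single tap plays it once; a long press plays it a random number of times
    or defers to a popup where the user picks the count."""
    if press_duration_s < LONG_PRESS_THRESHOLD_S:
        return 1
    if ask_user:
        raise NotImplementedError("UI popup for selecting the number of plays")
    return random.randint(2, 8)
```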
Further, in order to improve the user's experience, the user can create collaboratively on multiple terminals; therefore, the pad of the first terminal can be split off onto a second terminal, and the pad sample audio then acquired from the second terminal. The step of acquiring the pad sample audio from the second terminal in the embodiments of the present disclosure is described in detail below, and specifically includes:
s1501: a connection is established with the second terminal.
In the embodiment of the disclosure, the connection with the second terminal is established, and the connection with the second terminal is maintained.
When the connection with the second terminal is established, the established connection may be a wireless connection, for example, the wireless connection may be established through bluetooth, or the wireless connection may also be established through WiFi.
Further, a wired connection with the second terminal may also be established, which is not limited in the embodiment of the present disclosure.
S1502: the pad sample audio is acquired from the second terminal through the connection.
In the embodiment of the disclosure, after the second terminal acquires the pad sample audio, it sends the pad sample audio to the first terminal through the connection, so that the first terminal acquires the pad sample audio from the second terminal and adds the acquired pad sample audio to the preprocessed audio.
For example, referring to fig. 7, which is an interface schematic diagram of the second terminal in the embodiment of the present disclosure, a user clicks a pad selection control in the second terminal to obtain a pad sample audio, and sends the pad sample audio to the first terminal through a connection.
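For illustration, the transfer in step S1502 might be implemented as below, using a TCP socket as a stand-in for the Bluetooth, WiFi, or wired connection; the 4-byte length-prefixed framing is an assumption of this sketch, not something the embodiment specifies.

    import socket
    import struct

    def _recv_exact(conn: socket.socket, n: int) -> bytes:
        """Read exactly n bytes, raising if the second terminal closes the connection early."""
        data = b""
        while len(data) < n:
            chunk = conn.recv(n - len(data))
            if not chunk:
                raise ConnectionError("second terminal closed the connection early")
            data += chunk
        return data

    def receive_pad_sample(host: str, port: int) -> bytes:
        """Fetch one pad sample audio (as encoded bytes) from the second terminal."""
        with socket.create_connection((host, port)) as conn:
            (length,) = struct.unpack(">I", _recv_exact(conn, 4))   # assumed 4-byte length header
            return _recv_exact(conn, length)                        # the audio payload itself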
Further, in the embodiment of the present disclosure, the preprocessed audio obtained after the fusion processing may also be sent to another terminal, so that the other terminal processes the preprocessed audio, and the following describes in detail the step of processing the preprocessed audio by the other terminal in the embodiment of the present disclosure, specifically including:
s1601: and sending the pre-processed audio after the fusion processing to other terminals.
In the embodiment of the disclosure, after the source audio and the audio to be added are subjected to fusion processing, the preprocessed audio is obtained; the first terminal establishes a connection with the other terminals and sends the preprocessed audio to them through the established connection, so that the other terminals can receive the preprocessed audio generated in the first terminal.
S1602: and receiving the target audio sent by other terminals.
The target audio is obtained after the other terminals receive the preprocessed audio and process it in a preset processing mode.
In the embodiment of the disclosure, after receiving the preprocessed audio, the other terminals process the preprocessed audio in the preset processing mode to obtain the target audio, and then send the target audio to the first terminal through the established connection, so that the first terminal receives the target audio sent by the other terminals.
Wherein, the preset processing mode comprises at least one of the following: sound effect adjustment, looping, equalizer, and start-play point setting; the preset processing mode is not limited to the above.
Wherein, the sound effect adjustment comprises at least one of the following: reverberation, echo, and delay.
Looping means that audio passages are automatically looped according to the identified beat information.
The equalizer balances the volume of the high, middle, and low frequency bands.
Start-play point setting means that one or more start-play points are preset from the automatically identified refrains and the audio is played from them; the number of start-play points is not limited.
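Of these processing modes, the equalizer is the easiest to illustrate in isolation. The sketch below balances three bands with an FFT mask over a mono float signal; the 250 Hz and 4 kHz band edges are assumptions of this example, since the embodiment only distinguishes high, middle, and low frequencies.

    import numpy as np

    def three_band_equalizer(audio: np.ndarray, sr: int,
                             low_gain: float, mid_gain: float, high_gain: float,
                             low_cut: float = 250.0, high_cut: float = 4000.0) -> np.ndarray:
        """Balance the volume of the low, middle and high frequency bands."""
        spectrum = np.fft.rfft(audio)
        freqs = np.fft.rfftfreq(len(audio), d=1.0 / sr)
        gains = np.where(freqs < low_cut, low_gain,                  # low band
                         np.where(freqs < high_cut, mid_gain,        # middle band
                                  high_gain))                        # high band
        return np.fft.irfft(spectrum * gains, n=len(audio))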
For example, referring to fig. 8, which is an interface schematic diagram of sound effect processing in the embodiment of the present disclosure, a reverberation operation control, a loop operation control, and an equalizer operation control are displayed in an operation interface of another terminal, and a user may click the reverberation operation control, the loop operation control, and the equalizer operation control to process a reverberation effect, a loop effect, and an equalization effect of a preprocessed audio, so as to obtain a target audio, and send the target audio to a first terminal.
S1603: and playing the target audio.
In the embodiment of the disclosure, after the first terminal receives the target audio, the target audio is played.
Further, in the embodiment of the present disclosure, an interface recording may be performed on the audio playing interface to obtain an interface recorded video, and the following describes in detail the step of recording the audio playing interface in the embodiment of the present disclosure, specifically including:
s1701: and detecting a recording instruction, and carrying out interface recording on the audio playing interface to obtain an interface recording video.
In the embodiment of the disclosure, after the user triggers recording, interface recording is performed on the audio playing interface, so that an interface recorded video is obtained.
For example, referring to fig. 9a, which is an interface schematic diagram of an audio playing interface in an embodiment of the present disclosure, a user may perform interface recording on the audio playing interface by clicking a recording button in the audio playing interface, so as to obtain an interface recorded video.
It should be noted that, when recording the interface, the screen may be recorded only while the source audio and the audio to be added are being fused, or the screen may also be recorded while the audio to be added is being obtained, which is not limited in the embodiment of the present disclosure.
Further, the user may click the recording button again to finish recording and generate the interface recorded video, and the resulting Mlog may then be published in the station or stored locally; referring to fig. 9b, which is an interface schematic diagram of publishing or storing the video in the embodiment of the present disclosure.
S1702: the front video is acquired and obtained through the first image acquisition equipment, and/or the rear video is acquired and obtained through the second image acquisition equipment.
In the embodiment of the present disclosure, step S1702 may be performed in any of the following three manners:
The first manner: a front video is obtained.
The method specifically comprises: acquiring a front video through the first image acquisition device.
In the embodiment of the disclosure, when the user triggers a recording instruction, a front video is acquired through the first image acquisition device.
The first image acquisition device may be, for example, a front camera, which is not limited in the embodiments of the present disclosure.
The second manner: a rear video is obtained.
The method specifically comprises: acquiring a rear video through the second image acquisition device.
In the embodiment of the disclosure, when the user triggers a recording instruction, a rear video is acquired through the second image acquisition device.
The second image acquisition device may be, for example, a rear camera, which is not limited in the embodiment of the present disclosure.
The third manner: a front video and a rear video are obtained.
The method specifically comprises: acquiring a front video through the first image acquisition device and acquiring a rear video through the second image acquisition device.
In the embodiment of the disclosure, when the user triggers a recording instruction, the first image acquisition device and the second image acquisition device are started simultaneously, the front video is acquired through the first image acquisition device, and the rear video is acquired through the second image acquisition device.
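The three manners above amount to a simple dispatch over which cameras to start. In the sketch below, front_camera and rear_camera stand in for whatever camera API the terminal exposes, and their start_recording method is a hypothetical name, so this is illustrative only.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class CaptureResult:
        front_video: Optional[str] = None     # handle or path of the front video, if captured
        rear_video: Optional[str] = None      # handle or path of the rear video, if captured

    def capture_videos(use_front: bool, use_rear: bool,
                       front_camera, rear_camera) -> CaptureResult:
        """Start the requested image acquisition devices when the recording instruction fires."""
        result = CaptureResult()
        if use_front:
            result.front_video = front_camera.start_recording()    # first image acquisition device
        if use_rear:
            result.rear_video = rear_camera.start_recording()      # second image acquisition device
        return result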
For example, referring to fig. 10a, which is an interface schematic diagram of turning on a camera in the embodiment of the present disclosure, when the user clicks the "record" button in the audio playing interface, the user is asked in the audio playing interface whether to turn on the front camera and/or the rear camera during recording: when the user selects to turn on the front camera, the front camera of the current first terminal is turned on; when the user selects to turn on the rear camera, the rear camera is turned on; and when the user selects both, the front camera and the rear camera are turned on at the same time.
Further, the user may also connect to another terminal and turn on the front and rear cameras of that terminal; for example, as shown in fig. 10a, when the user selects to connect a new device, the first terminal connects to the other terminal and turns on its front and rear cameras.
It should be noted that the second terminal and other terminals may also request main-screen control, and the operations of the three terminals may be interchanged.
S1703: and simultaneously displaying the interface recorded video, the front video and/or the rear video on the audio playing interface.
In the embodiment of the disclosure, after the interface recorded video, the front video, and/or the rear video are obtained, they are displayed on the audio playing interface at the same time.
Further, in the embodiment of the present disclosure, corresponding playing operations may be further performed on the interface recorded video, the front video, and the rear video, respectively, specifically including:
and executing corresponding playing operation on the interface recorded video, the front-end video and/or the rear-end video.
Wherein, the playing operation at least comprises any one of the following operations: switching, dragging, amplifying and stacking between pictures.
For example, when the acquired videos are the front video, the rear video, and the interface recorded video, as shown in fig. 10b, which is an interface schematic diagram of the current terminal cameras in the embodiment of the present disclosure, the front video acquired by the front camera is on the left side of the audio playing interface, the interface recorded video is in the middle, and the rear video acquired by the rear camera is on the right side of the audio playing interface; at this time, the pictures of the videos acquired by the front and rear cameras of the current first terminal are maximized.
For another example, when the acquired videos are the front video, the video acquired by another terminal, and the interface recorded video, as shown in fig. 10c, which is an interface schematic diagram of video playing with another terminal in the embodiment of the present disclosure, the front video acquired by the front camera of the current first terminal is on the left side of the audio playing interface, the interface recorded video is in the middle, and the video acquired by the other terminal is on the right side of the audio playing interface; at this time, the pictures of the front video of the current first terminal and the video acquired by the other terminal are maximized.
For example, the front video and the rear video can be minimized; referring to fig. 10d, which is an interface schematic diagram of picture minimization in the embodiment of the present disclosure, the front video and the rear video are minimized and displayed at the upper right of the audio playing interface.
For example, the front video and the rear video can also be stacked; referring to fig. 10e, which is an interface schematic diagram of multiple pictures stacked in the embodiment of the present disclosure, the front video and the rear video are stacked and displayed at the upper right of the audio playing interface.
Further, after the target audio is generated, the user can store the generated target audio in the cloud, so that other users can directly perform operations such as adding, deleting, adjusting, and replacing on the basis of the target audio generated by the user, thereby generating new target audio and completing the creation together.
Further, in the embodiment of the present disclosure, a cover may be added to the obtained target audio, specifically including:
and displaying a cover image corresponding to the target audio obtained after the fusion processing on an audio playing interface.
In the embodiment of the disclosure, the user uploads the target audio to the cloud for review; after the target audio is approved, it can be published in the station through the management background, a cover image is generated for the target audio, and the cover image corresponding to the target audio is displayed in the audio playing interface.
When the cover image is generated, it may be uploaded by the user, or it may be generated by automatic matching according to the feature information of the target audio, which is not limited in the embodiment of the disclosure.
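A minimal sketch of the cover selection logic, assuming the automatic match is a lookup of the target audio's feature tags in a cover library; the mood/genre keys and the default fallback are assumptions of this example, not details given by the embodiment.

    from typing import Dict, Optional

    def choose_cover_image(user_upload: Optional[str], audio_features: Dict[str, str],
                           cover_library: Dict[str, str]) -> str:
        """Pick a cover image: prefer the user's upload, otherwise auto-match by feature info."""
        if user_upload:
            return user_upload
        tag = audio_features.get("mood") or audio_features.get("genre", "default")
        return cover_library.get(tag, cover_library["default"])     # fall back to a default cover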
Further, in the embodiment of the present disclosure, when generating a target audio, a special effect video may be added to the target audio, which specifically includes:
and displaying the special effect video corresponding to the target audio obtained after the fusion processing on an audio playing interface.
In the embodiment of the disclosure, in the process of generating the target audio, a user can trigger to obtain a special-effect video by clicking an operation control in an audio playing interface, and the special-effect video corresponding to the target audio is displayed in the audio playing interface.
Of course, when the audio to be added or the pad sampled audio is obtained, the special effect video can be triggered and generated according to the audio to be added or the pad sampled audio.
For example, when a cola-themed pad sample audio is acquired during creation, the official cola Easter egg in the station is triggered, and an effect of the screen filling up with cola is displayed in the audio playing interface.
Further, in the embodiment of the present disclosure, after receiving the interface conversion instruction, the disc-playing mode may be switched to the first mode, and the following describes in detail the step of switching the disc-playing mode to the first mode in the embodiment of the present disclosure, specifically including:
and when an interface conversion instruction is detected, controlling the target object in the audio playing interface to return to the original position.
And the original position represents the position information of the target object when the first terminal is in the first mode.
Specifically, after the audio to be added is acquired, or after the audio to be added and the source audio in the first mode are subjected to fusion processing, whether an interface conversion instruction is acquired can be detected in real time; if it is determined that the interface conversion instruction is detected, the target object in the audio playing interface is controlled to return to the original position.
When detecting the interface conversion instruction, a confirmation popup window may be displayed in the audio playing interface for the user to confirm, so that the interface conversion instruction is obtained.
For example, when the interface conversion instruction is detected, the target object is animated back to the original position through a preset animation.
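The preset animation could be as simple as a linear interpolation back to the stored original position, as in the sketch below; the target object here is a hypothetical UI element exposing x, y, and redraw(), and the 0.3-second duration is an arbitrary choice.

    import time

    def animate_back_to_origin(target, origin_x: float, origin_y: float,
                               duration_s: float = 0.3, steps: int = 30) -> None:
        """Move the target object (e.g. the vinyl record) back to its first-mode position."""
        start_x, start_y = target.x, target.y
        for i in range(1, steps + 1):
            t = i / steps                                           # animation progress in [0, 1]
            target.x = start_x + (origin_x - start_x) * t
            target.y = start_y + (origin_y - start_y) * t
            target.redraw()                                         # hypothetical UI refresh call
            time.sleep(duration_s / steps)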
In the embodiment of the disclosure, the operations and functions of real disc playing can be combined, and secondary creation is performed based on the currently played scene, which can improve the user experience; the pad sample audio is recommended based on the source audio, and automatic beat calibration between the source audio and the audio to be added is supported during disc playing, which can improve the efficiency of audio processing.
Based on the foregoing embodiment, the following describes an audio processing method in an embodiment of the present disclosure in detail by using a specific example, and referring to fig. 11, which is another flowchart of an audio processing method in an embodiment of the present disclosure, specifically including:
step 1100: and when the deflection angle of the first terminal is determined to be 45 degrees, controlling a black rubber stylus in the audio playing interface to lift.
Step 1101: and when the deflection angle of the first terminal is determined to exceed 45 degrees, hiding an operation control in the audio playing interface, and moving the vinyl record to a preset left disc position.
In the disclosed embodiment, during the movement of the vinyl record to the left disc position, the image area of the vinyl record increases as the deflection angle increases.
Step 1102: and when the deflection angle of the first terminal is determined to be 90 degrees, controlling the audio playing interface to be switched from the first mode to the disc playing mode.
In the embodiment of the disclosure, when the audio playing interface is switched to the disc playing mode, each operation control in the audio playing interface has already been hidden, the vinyl record is located at the left disc position of the audio playing interface, and the right disc position is used for the audio to be added.
Note that the vinyl record at the position of the left disc is the song currently being played.
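Steps 1100 to 1102 amount to a small state machine driven by the deflection angle. The sketch below is illustrative only: ui is a hypothetical view controller, its method names are invented for this example, and the 0.5 growth factor for the record's image area is an assumption.

    def handle_deflection(angle_deg: float, ui) -> None:
        """Drive the interface transition from the measured deflection angle of the first terminal."""
        if angle_deg >= 45.0:
            ui.lift_stylus()                              # step 1100: lift the stylus at 45 degrees
        if angle_deg > 45.0:
            ui.hide_operation_controls()                  # step 1101: hide the operation controls
            progress = min((angle_deg - 45.0) / 45.0, 1.0)
            ui.move_record_toward_left_disc(progress)     # slide the vinyl record to the left disc position
            ui.scale_record(1.0 + 0.5 * progress)         # image area grows with the angle (factor assumed)
        if angle_deg >= 90.0:
            ui.switch_to_disc_playing_mode()              # step 1102: enter the disc playing mode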
Step 1103: and taking the audio recommended by the system as the audio to be added.
In the embodiment of the disclosure, the audio recommended by the system can be used as the audio to be added, and the audio selected from the recommendation list of the audio playing interface can be used as the audio to be added.
Step 1104: and adjusting the beat information of the source audio into the beat information of the audio to be added.
In the embodiment of the disclosure, the beat information of the source audio and the beat information of the audio to be added are extracted, and the beat information of the source audio is adjusted to the beat information of the audio to be added, so that the beat information of the source audio is the same as the beat information of the audio to be added.
Step 1105: and carrying out fusion processing on the adjusted source audio and the adjusted audio to be added.
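Steps 1104 and 1105 can be illustrated with the sketch below, which uses librosa only as an example toolkit (the embodiment does not name one): the source tempo is estimated, the source audio is time-stretched to the tempo of the audio to be added, and the two tracks are mixed additively. The mixing gain, the mono assumption, and the truncation to the shorter track are choices of this sketch.

    import librosa

    def beat_matched_mix(source_path: str, added_path: str, added_gain: float = 0.8):
        """Stretch the source audio to the tempo of the audio to be added, then fuse the two."""
        src, sr = librosa.load(source_path, sr=None, mono=True)
        add, _ = librosa.load(added_path, sr=sr, mono=True)           # resample to the source rate

        src_tempo, _ = librosa.beat.beat_track(y=src, sr=sr)          # beat information of the source audio
        add_tempo, _ = librosa.beat.beat_track(y=add, sr=sr)          # beat information of the audio to be added

        # Adjust the source audio so its beat information matches that of the audio to be added.
        src = librosa.effects.time_stretch(src, rate=float(add_tempo) / float(src_tempo))

        n = min(len(src), len(add))                                   # fuse over the overlapping span
        return src[:n] + added_gain * add[:n], sr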
Step 1106: And adding the pad sample audio to the preprocessed audio obtained after the fusion processing to obtain the target audio.
Specifically, the pad sample audio is acquired and added to the preprocessed audio obtained after the fusion processing, thereby obtaining the target audio.
In the embodiment of the disclosure, when the gesture of the first terminal is detected to meet the preset condition, the disc playing mode is started, and therefore, when the first mode is switched to the disc playing mode, the playing is not interrupted, so that the playing continuity can be ensured, and the experience of a user is improved.
Based on the same inventive concept, an embodiment of the present disclosure further provides an audio processing apparatus, where the audio processing apparatus may be a hardware structure, a software module, or a hardware structure plus a software module, and the embodiment of the audio processing apparatus may inherit the content described in the foregoing method embodiment. Based on the above embodiments, referring to fig. 12, a schematic structural diagram of an audio processing apparatus in an embodiment of the present disclosure is shown, which specifically includes:
the first switching module 1200 is configured to switch an audio playing interface where the first terminal is located from a first mode to a disc playing mode when detecting that the gesture of the first terminal meets a preset condition;
a determining module 1201, configured to determine, when the audio playing interface is in a disc playing mode, an audio to be added;
a first processing module 1202, configured to perform fusion processing on the audio to be added and the source audio in the first mode.
Optionally, the first switching module 1200 is specifically configured to:
obtaining a deflection angle;
and if the deflection angle meets the preset condition, switching the audio playing interface from a first mode to a disc playing mode.
Optionally, if it is determined that the deflection angle satisfies the preset condition, when the audio playing interface is switched from the first mode to the disc playing mode, the first switching module 1200 is specifically configured to:
and if the deflection angle reaches the first angle threshold value, controlling a target object in the audio playing interface to move towards a preset direction until the target object is moved to a preset target position, and switching the audio playing interface from a first mode to a disc playing mode.
Optionally, the first switching module 1200 is further configured to:
and in the process that the target object moves to the target position, hiding an operation control in the audio playing interface and/or increasing the image area of the target object.
Optionally, when determining that an audio is to be added, the determining module 1201 is specifically configured to:
taking the audio recommended by the system as the audio to be added; or the like, or, alternatively,
and taking the audio selected from the audio playing interface as the audio to be added.
Optionally, the first processing module 1202 is specifically configured to:
determining the audio playing speed of the source audio and the audio playing speed of the audio to be added;
adjusting the audio playing speed of the source audio and the audio playing speed of the audio to be added;
and carrying out fusion processing on the adjusted source audio and the adjusted audio to be added.
Optionally, the first processing module 1202 is specifically configured to:
extracting the beat information of the source audio and the beat information of the audio to be added;
adjusting the beat information of the source audio and the beat information of the audio to be added;
and carrying out fusion processing on the adjusted source audio and the adjusted audio to be added.
Optionally, after the audio to be added and the source audio in the first mode are fused, the method further includes:
an obtaining module 1203, configured to obtain a pad sampling audio;
an adding module 1204, configured to add the pad sample audio to the preprocessed audio obtained after the fusion processing.
Optionally, the obtaining module 1203 is specifically configured to:
establishing a connection with a second terminal;
pad sample audio is acquired from the second terminal through the connection.
Optionally, after the audio to be added and the source audio in the first mode are fused, the method further includes:
a sending module 1205 is configured to send the preprocessed audio after the fusion processing to other terminals;
a receiving module 1206, configured to receive a target audio sent by the other terminal, where the target audio is obtained after the other terminal receives the preprocessed audio and processes the preprocessed audio in a preset processing manner;
an audio playing module 1207, configured to play the target audio.
Optionally, the apparatus further comprises:
the detection module 1208 is configured to detect a recording instruction, perform interface recording on the audio playing interface, and obtain an interface recorded video;
the acquisition module 1209 is configured to acquire a front-end video through a first image acquisition device and/or acquire a rear-end video through a second image acquisition device;
the first display module 1210 is configured to display an interface recorded video, the front video, and/or the rear video simultaneously on the audio playing interface.
Optionally, the apparatus further comprises:
a second processing module 1211, configured to perform a corresponding play operation on the interface recorded video, the front-end video, and/or the back-end video, where the play operation at least includes any one of: switching, dragging, amplifying and stacking between pictures.
Optionally, the apparatus further comprises:
and the second display module 1212 is configured to display, on the audio playing interface, a cover image corresponding to the target audio obtained after the fusion processing.
Optionally, the apparatus further comprises:
and a third display module 1213, configured to display, on the audio playing interface, a special effect video corresponding to the target audio obtained after the fusion processing.
Optionally, after the audio playing interface where the first terminal is located is switched from the first mode to the disc playing mode, the method further includes:
a second switching module 1214, configured to switch from the disc playing mode to the first mode if it is determined that no disc playing processing instruction is detected within a preset time period and/or the posture of the first terminal meets a preset posture condition.
Optionally, after the audio to be added and the source audio in the first mode are fused, the method further includes:
the control module 1215 is configured to control the target object in the audio playing interface to return to an original position when the interface conversion instruction is detected, where the original position represents the position information of the target object when the first terminal is in the first mode.
Based on the above embodiments, fig. 13 is a schematic structural diagram of an electronic device in an embodiment of the disclosure.
The present disclosure provides an electronic device, which may include a processor 1310 (CPU), a memory 1320, an input device 1330, an output device 1340, and the like, wherein the input device 1330 may include a keyboard, a mouse, a touch screen, and the like, and the output device 1340 may include a Display device, such as a Liquid Crystal Display (LCD), a Cathode Ray Tube (CRT), and the like.
Memory 1320 may include Read Only Memory (ROM) and Random Access Memory (RAM), and provides the processor 1310 with program instructions and data stored in the memory 1320. In the disclosed embodiment, the memory 1320 may be used to store a program of any one of the audio processing methods in the disclosed embodiment.
The processor 1310 is configured to execute any one of the audio processing methods according to the disclosed embodiments by calling the program instructions stored in the memory 1320.
Based on the above embodiments, in the embodiments of the present disclosure, there is provided a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the audio processing method in any of the above method embodiments.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the present disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications can be made in the present disclosure without departing from the spirit and scope of the disclosure. Thus, if such modifications and variations of the present disclosure fall within the scope of the claims of the present disclosure and their equivalents, the present disclosure is intended to include such modifications and variations as well.

Claims (10)

1. An audio processing method, comprising:
when the gesture of the first terminal is detected to meet a preset condition, switching an audio playing interface where the first terminal is located from a first mode to a disc playing mode;
when the audio playing interface is in a disc playing mode, determining audio to be added;
and fusing the audio to be added with the source audio in the first mode.
2. The method according to claim 1, wherein when detecting that the gesture of the first terminal meets a preset condition, switching the audio playing interface where the first terminal is located from the first mode to a disc playing mode specifically comprises:
obtaining a deflection angle;
and if the deflection angle meets the preset condition, switching the audio playing interface from a first mode to a disc playing mode.
3. The method of claim 2, wherein if it is determined that the deflection angle satisfies a predetermined condition, switching the audio playback interface from the first mode to the disc playing mode includes:
and if the deflection angle reaches the first angle threshold value, controlling a target object in the audio playing interface to move towards a preset direction until the target object is moved to a preset target position, and switching the audio playing interface from a first mode to a disc playing mode.
4. The method of claim 3, wherein the method further comprises:
and in the process that the target object moves to the target position, hiding an operation control in the audio playing interface and/or increasing the image area of the target object.
5. The method of claim 1, wherein determining the audio to be added specifically comprises:
taking the audio recommended by the system as the audio to be added; or the like, or, alternatively,
and taking the audio selected from the audio playing interface as the audio to be added.
6. The method according to claim 1, wherein the merging the audio to be added with the source audio in the first mode includes:
determining the audio playing speed of the source audio and the audio playing speed of the audio to be added;
adjusting the audio playing speed of the source audio and the audio playing speed of the audio to be added;
and carrying out fusion processing on the adjusted source audio and the adjusted audio to be added.
7. The method of claim 1, wherein after the merging the audio to be added with the source audio in the first mode, further comprising:
acquiring a percussion pad sampling audio;
and adding the pad sampling audio to the pre-processing audio obtained after the fusion processing.
8. An audio processing apparatus, comprising:
the first switching module is used for switching the audio playing interface where the first terminal is located from a first mode to a disc playing mode when detecting that the posture of the first terminal meets a preset condition;
the determining module is used for determining audio to be added when the audio playing interface is in a disc playing mode;
and the first processing module is used for fusing the audio to be added with the source audio in the first mode.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the method of any of claims 1-7 are implemented when the program is executed by the processor.
10. A computer-readable storage medium having stored thereon a computer program, characterized in that: the computer program when executed by a processor implements the steps of the method of any one of claims 1 to 7.
CN202110782371.XA 2021-07-12 2021-07-12 Audio processing method and device Active CN113590076B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110782371.XA CN113590076B (en) 2021-07-12 2021-07-12 Audio processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110782371.XA CN113590076B (en) 2021-07-12 2021-07-12 Audio processing method and device

Publications (2)

Publication Number Publication Date
CN113590076A true CN113590076A (en) 2021-11-02
CN113590076B CN113590076B (en) 2024-03-29

Family

ID=78246756

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110782371.XA Active CN113590076B (en) 2021-07-12 2021-07-12 Audio processing method and device

Country Status (1)

Country Link
CN (1) CN113590076B (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1643570A (en) * 2002-03-28 2005-07-20 皇家飞利浦电子股份有限公司 Media player with 'DJ' mode
US20100318204A1 (en) * 2009-06-16 2010-12-16 Kyran Daisy Virtual phonograph
CN103440330A (en) * 2013-09-03 2013-12-11 网易(杭州)网络有限公司 Music program information acquisition method and equipment
WO2015114216A2 (en) * 2014-01-31 2015-08-06 Nokia Corporation Audio signal analysis
CN107111642A (en) * 2014-12-31 2017-08-29 Pcms控股公司 For creating the system and method for listening to daily record and music libraries
CN108780653A (en) * 2015-10-27 2018-11-09 扎克·J·沙隆 Audio content makes, the system and method for Audio Sorting and audio mix
CN105959792A (en) * 2016-04-28 2016-09-21 宇龙计算机通信科技(深圳)有限公司 Playing control method, device and system
CN109147745A (en) * 2018-07-25 2019-01-04 北京达佳互联信息技术有限公司 Song editing and processing method, apparatus, electronic equipment and storage medium
WO2020077855A1 (en) * 2018-10-19 2020-04-23 北京微播视界科技有限公司 Video photographing method and apparatus, electronic device and computer readable storage medium
CN109587549A (en) * 2018-12-05 2019-04-05 广州酷狗计算机科技有限公司 Video recording method, device, terminal and storage medium
CN110225382A (en) * 2019-05-27 2019-09-10 上海天怀信息科技有限公司 Audio-video-interactive integration operating software based on split screen control technology
CN112885318A (en) * 2019-11-29 2021-06-01 阿里巴巴集团控股有限公司 Multimedia data generation method and device, electronic equipment and computer storage medium
CN112037737A (en) * 2020-07-07 2020-12-04 声音启蒙科技(深圳)有限公司 Audio playing method and playing system
CN111899706A (en) * 2020-07-30 2020-11-06 广州酷狗计算机科技有限公司 Audio production method, device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
曹偲 et al.: "Music separation technology based on a frequency-domain sparse auto-encoder network" (基于频域稀疏自编码网络的音乐分离技术), 电声技术 (Audio Engineering), vol. 44, no. 6, 31 December 2020 (2020-12-31), pages 91-94 *
赵江涛: "My music, my call: a hands-on review of ROLI BLOCKS" (我的音乐 我做主 ROLI BLOCKS体验评测), 消费电子 (Consumer Electronics), no. 03, 5 March 2017 (2017-03-05), pages 68-71 *

Also Published As

Publication number Publication date
CN113590076B (en) 2024-03-29

Similar Documents

Publication Publication Date Title
US11030987B2 (en) Method for selecting background music and capturing video, device, terminal apparatus, and medium
CN109819313B (en) Video processing method, device and storage medium
CN104836889B (en) Mobile terminal and its control method
WO2016177296A1 (en) Video generation method and apparatus
CN104104986B (en) The synchronous method and device of audio and captions
CN113596552B (en) Display device and information display method
CN111163274B (en) Video recording method and display equipment
WO2017193540A1 (en) Method, device and system for playing overlay comment
CN104104990B (en) Adjust the method and device of subtitle in video
US20100134411A1 (en) Information processing apparatus and information processing method
CN1856065B (en) Video processing apparatus
WO2017014800A1 (en) Video editing on mobile platform
US20220057984A1 (en) Music playing method, device, terminal and storage medium
CN109348239A (en) Piece stage treatment method, device, electronic equipment and storage medium is broadcast live
EP2665290A1 (en) Simultaneous display of a reference video and the corresponding video capturing the viewer/sportsperson in front of said video display
CN112445395A (en) Music fragment selection method, device, equipment and storage medium
CN102208205A (en) Video/Audio Player
CN106921883A (en) A kind of method and device of video playback treatment
US20220078221A1 (en) Interactive method and apparatus for multimedia service
JP2012034234A (en) Video reproduction device and video reproduction method
CN111107412A (en) Media playing progress synchronization method and device and storage medium
CN104104987B (en) Picture and synchronous sound method and device in video playing
CN112788422A (en) Display device
CN113590076B (en) Audio processing method and device
CN112055234A (en) Television equipment screen projection processing method, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant