WO2024082802A1 - Audio processing method, apparatus, and terminal device - Google Patents

Audio processing method, apparatus, and terminal device

Info

Publication number
WO2024082802A1
WO2024082802A1
Authority
WO
WIPO (PCT)
Prior art keywords
area
accompaniment
lyrics
response
terminal device
Prior art date
Application number
PCT/CN2023/113811
Other languages
English (en)
French (fr)
Inventor
汉特拉库尔拉姆撒恩
孟文翰
李佩道
李岩冰
李星毅
Original Assignee
抖音视界有限公司
北京字跳网络技术有限公司
Priority date
Filing date
Publication date
Application filed by 抖音视界有限公司 and 北京字跳网络技术有限公司
Publication of WO2024082802A1

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H: ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00: Details of electrophonic musical instruments
    • G10H1/36: Accompaniment arrangements

Definitions

  • the embodiments of the present disclosure relate to the field of audio processing technology, and in particular, to an audio processing method, apparatus, and terminal device.
  • Music creators can use music applications to create music. For example, music creators can use music applications to add audio effects to an audio clip.
  • music creators can add a piece of music arrangement to a music application, and add related sound effects, lyrics, and other elements to the arrangement through the music application.
  • however, creating arrangements and lyrics is difficult, the existing audio editing functions are limited, and the skill requirements on music creators are high; music creators cannot create music simply, and the efficiency of music creation is low.
  • the present disclosure provides an audio processing method, an apparatus, and a terminal device, which are used to solve the technical problem of low music-creation efficiency in the prior art.
  • the present disclosure provides an audio processing method, the audio processing method comprising:
  • the first page comprising a first area and a second area, the first area being associated with audio editing, and the second area being associated with text editing;
  • a first accompaniment area is displayed in the first area, and a first lyrics area is displayed in the second area.
  • the present disclosure provides an audio processing device, the audio processing device comprising a display module and a response module, wherein:
  • the display module is used to display a first page, the first page includes a first area and a second area, the first area is associated with audio editing, and the second area is associated with text editing;
  • the response module is used for displaying a first accompaniment area in the first area and a first lyrics area in the second area in response to an editing operation on the first area or the second area.
  • an embodiment of the present disclosure provides a terminal device, including: a processor and a memory;
  • the memory stores computer-executable instructions
  • the processor executes the computer-executable instructions stored in the memory, so that the processor performs the audio processing method described in the first aspect and its various possible implementations.
  • an embodiment of the present disclosure provides a computer-readable storage medium in which computer-executable instructions are stored.
  • when a processor executes the computer-executable instructions, the audio processing method described in the first aspect and its various possible implementations is implemented.
  • an embodiment of the present disclosure provides a computer program product, including a computer program, which, when executed by a processor, implements the audio processing method described in the first aspect and its various possible implementations.
  • an embodiment of the present disclosure provides a computer program, which, when executed by a processor, implements the audio processing method described in the first aspect and its various possible implementations.
  • FIG1 is a schematic diagram of an application scenario provided by an embodiment of the present disclosure.
  • FIG2 is a schematic flow chart of an audio processing method provided by an embodiment of the present disclosure.
  • FIG3 is a schematic diagram of a process of displaying a first page provided by an embodiment of the present disclosure
  • FIG4 is a schematic diagram of a process of displaying a first accompaniment area and a first lyrics area according to an embodiment of the present disclosure
  • FIG5 is a schematic diagram showing a first lyrics area and a first accompaniment area provided by an embodiment of the present disclosure
  • FIG6A is a schematic diagram of deleting a first lyrics area and a first accompaniment area provided by an embodiment of the present disclosure
  • FIG6B is a schematic diagram of deleting a first accompaniment area and a first lyrics area provided by an embodiment of the present disclosure
  • FIG7 is a schematic diagram showing a first accompaniment area and a first lyrics area provided by an embodiment of the present disclosure
  • FIG8 is a schematic diagram of a process of displaying an accompaniment style window provided by an embodiment of the present disclosure.
  • FIG9 is a schematic diagram of a process for determining a target accompaniment style provided by an embodiment of the present disclosure.
  • FIG10 is a schematic diagram of a process of displaying a first accompaniment area provided by an embodiment of the present disclosure
  • FIG11 is a schematic diagram showing a first lyrics area and a first accompaniment area provided by an embodiment of the present disclosure
  • FIG12 is a schematic diagram of a process of displaying a text title window provided by an embodiment of the present disclosure
  • FIG13 is a schematic diagram of a process of displaying a first lyrics area provided by an embodiment of the present disclosure.
  • FIG14 is a schematic diagram of a process of displaying lyrics provided by an embodiment of the present disclosure.
  • FIG15 is a schematic diagram of a method for displaying a first voice provided by an embodiment of the present disclosure
  • FIG16 is a schematic diagram of a process of displaying a sound effect window provided by an embodiment of the present disclosure
  • FIG17 is a schematic diagram of adding an audio track associated with a second audio track provided by an embodiment of the present disclosure
  • FIG18 is a schematic diagram of the structure of an audio processing device provided by an embodiment of the present disclosure.
  • FIG19 is a schematic diagram of the structure of another audio processing device provided by an embodiment of the present disclosure.
  • FIG20 is a schematic diagram of the structure of a terminal device provided in an embodiment of the present disclosure.
  • Terminal device: a device with a wireless transceiver function. Terminal devices can be deployed on land (indoors or outdoors, handheld, wearable, or vehicle-mounted); they can also be deployed on water (such as on ships).
  • the terminal device can be a mobile phone, a portable Android device (PAD), a computer with a wireless transceiver function, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a vehicle-mounted terminal device, a wireless terminal in self-driving, a wireless terminal in remote medicine, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, a wearable terminal device, etc.
  • PAD: portable Android device
  • VR: virtual reality
  • AR: augmented reality
  • the terminal device involved in the embodiments of the present disclosure may also be referred to as a terminal, user equipment (UE), access terminal device, vehicle-mounted terminal, industrial control terminal, UE unit, UE station, mobile station, remote station, remote terminal device, mobile device, UE terminal device, wireless communication device, UE agent, or UE device.
  • the terminal device may be fixed or mobile.
  • Music theory, short for the theory of music, includes basic theories of lower difficulty.
  • music theory can include music score reading, intervals, chords, rhythm, beats, etc.
  • Music theory can also include theories with higher difficulty.
  • music theory can include harmony, polyphony, form, melody, instrumentation, etc.
  • Arrangement is the process of arranging music in combination with music theory. For example, the arranger can write accompaniment and harmony for a musical work according to the main melody (beat) of the music and the style of the work that the creator wants to express (cheerful, rock, etc.).
  • music creators can create an accompaniment and add sound effects, lyrics, and other elements to the accompaniment through music applications to complete the creation of music.
  • the creation of accompaniment and lyrics is difficult, and music creators need to learn music theory knowledge.
  • the existing music editing functions are limited and their operation is complex; music creators cannot create music simply, and the efficiency of music creation is low.
  • the embodiment of the present disclosure provides an audio processing method, and the terminal device can display a first area associated with audio editing and a second area associated with text editing.
  • the first accompaniment area is displayed in the first area
  • the first lyrics area corresponding to the first accompaniment area is displayed in the second area
  • the terminal device can display the accompaniment area associated with the text editing operation in the first area.
  • the terminal device can display the lyrics area associated with the audio editing operation in the second area. In this way, when the user performs an editing operation in any area, the terminal device can generate and display the accompaniment area and the lyrics area, thereby reducing the complexity of music creation and improving the efficiency of music creation.
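The paired display described above can be sketched as a small data model. This is a minimal sketch under assumptions: the class `FirstPage`, its method `on_edit`, and all field names are hypothetical, since the disclosure describes UI behavior rather than an API.

```python
# Sketch of the bidirectional editing behavior: an editing operation on
# either area causes both a first accompaniment area and its matching
# first lyrics area to be displayed. All names are hypothetical.

class FirstPage:
    def __init__(self):
        self.first_area = []   # accompaniment areas (audio editing)
        self.second_area = []  # lyrics areas (text editing)

    def on_edit(self, target_area, paragraph):
        """target_area is 'first' or 'second'; the pairing is symmetric,
        so both areas are updated no matter which one was edited."""
        self.first_area.append({"paragraph": paragraph, "accompaniment": None})
        self.second_area.append({"title": paragraph.capitalize(), "lyrics": ""})

page = FirstPage()
page.on_edit("second", "prelude")  # editing the text area...
# ...also produces the prelude accompaniment area in the first area.
```

Because the pairing is symmetric, a trigger on the text-editing area produces the accompaniment area as well, which is the behavior illustrated in FIG1.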
  • FIG1 is a schematic diagram of an application scenario provided by an embodiment of the present disclosure.
  • the display page of the terminal device is a first page, and the first page includes a first area associated with audio editing and a second area associated with text editing. If the terminal device displays the text "Prelude" in the second area, the terminal device can display the accompaniment corresponding to the prelude in the first area. In this way, when the user performs editing operations in any area, the terminal device can display the corresponding content in another area, thereby reducing the complexity of music creation and improving the efficiency of music creation.
  • FIG. 1 is only an illustrative example of an application scenario of an embodiment of the present disclosure, and is not intended to limit the application scenario of the embodiment of the present disclosure.
  • FIG2 is a flow chart of an audio processing method provided by an embodiment of the present disclosure. Referring to FIG2 , the method may include:
  • the execution subject of the embodiment of the present disclosure may be a terminal device, or an audio processing device arranged in the terminal device.
  • the audio processing device may be implemented by software, or the audio processing device may be implemented by a combination of software and hardware.
  • the first page includes a first area and a second area.
  • the first area is associated with audio editing
  • the second area is associated with text editing.
  • audio can be displayed in the first area.
  • the terminal device can display a frequency spectrum corresponding to the accompaniment in the first area, and the terminal device can also display a frequency corresponding to the accompaniment in the first area.
  • text may be displayed in the second area.
  • the terminal device may display a title (such as a prelude, a verse, etc.) in the second area, the terminal device may display lyrics in the second area, or the terminal device may display a title and lyrics in the second area, which is not limited in the embodiments of the present disclosure.
  • the terminal device may display the first page according to the following feasible implementation method: in response to a touch operation on the browser program, display the browser page, enter the first URL associated with the first page in the URL input area of the browser page, and in response to a jump operation to the first URL, display the first page.
  • the terminal device may display a page corresponding to the browser, the browser page includes a URL input area, the user may enter the URL associated with the first page in the URL input area, and click a page jump control, the browser may jump to the first page, and display the first page.
  • FIG3 is a schematic diagram of a process of displaying a first page provided by an embodiment of the present disclosure.
  • the display page of the terminal device includes a browser control.
  • the terminal device displays a browser page, and the browser page includes a URL input area.
  • the browser page can jump to the first page, and the first page includes a first area and a second area.
  • the user can click the display page of the terminal device with a mouse, tap it by touch, or trigger it by voice control, which is not limited in the embodiments of the present disclosure.
  • S202 In response to an editing operation on the first area or the second area, display a first accompaniment area in the first area, and display a first lyrics area in the second area.
  • the first accompaniment area is displayed in the first area
  • the first lyrics area is displayed in the second area.
  • Case 1: in response to a trigger operation on the first area.
  • a first accompaniment area is displayed in the first area, and a first lyrics area corresponding to the first accompaniment area is displayed in the second area.
  • the first lyrics area may include a lyrics paragraph title and text content.
  • the lyrics paragraph title may be the title of the arrangement paragraph
  • the text content may be the lyrics of the arrangement.
  • the lyrics paragraph title may be a title such as "Prelude", "Verse", "Chorus", or "Outro"
  • the text content may be text lyrics arbitrarily input by the user or lyrics intelligently recommended by the terminal device.
  • the triggering operation on the first area may include a user's touch operation or voice operation on the first area, which is not limited in the embodiments of the present disclosure.
  • the terminal device may display the first accompaniment area in the first area, and display the first lyrics area corresponding to the first accompaniment area in the second area.
  • the first accompaniment area in the first area may include an accompaniment.
  • the first area may display the first accompaniment area
  • the first accompaniment area may include a note diagram of the accompaniment (displaying the notes of the accompaniment), a spectrum diagram (displaying the amplitude of the accompaniment), etc., which is not limited in the embodiments of the present disclosure.
  • the terminal device can intelligently recommend an accompaniment associated with the first accompaniment area, and the terminal device can also load an external accompaniment, which is not limited in the embodiments of the present disclosure.
  • each first accompaniment area has a corresponding first lyrics area. For example, if the first accompaniment area is the prelude area in the arrangement, the lyrics paragraph title of the first lyrics area corresponding to the first accompaniment area is the text "Prelude", and the text content in the first lyrics area is the lyrics of the prelude.
  • FIG4 is a schematic diagram of a process for displaying a first accompaniment area and a first lyrics area provided by an embodiment of the present disclosure.
  • the display page of the terminal device includes a first page, and the first page includes a second area and a first area, and the first area includes an add accompaniment control.
  • the terminal device can generate an accompaniment area for the verse in the first area, where the accompaniment area includes the accompaniment of the verse, and display the lyrics paragraph title "Verse" in the second area. In this way, the operational complexity of music creation can be reduced and the efficiency of audio creation can be improved.
  • the terminal device can intelligently recommend the accompaniment associated with the first accompaniment area, and display the note diagram of the accompaniment in the first accompaniment area, and display the first lyrics area corresponding to the first accompaniment area in the second area. In this way, the complexity of music creation can be reduced and the efficiency of music creation can be improved.
  • Case 2: in response to a trigger operation on the second area.
  • the first lyrics area is displayed in the second area, and the first accompaniment area corresponding to the first lyrics area is displayed in the first area.
  • the terminal device may display the first lyrics area in the second area, and display the first accompaniment area corresponding to the first lyrics area in the first area.
  • for example, if the terminal device displays the prelude lyrics area in the second area, the terminal device displays the corresponding prelude accompaniment area in the first area.
  • the triggering operation on the second area may include a touch operation or a voice operation on the second area by the user, which is not limited in this embodiment of the present disclosure.
  • FIG5 is a schematic diagram of a method for displaying a first lyrics area and a first accompaniment area provided by an embodiment of the present disclosure.
  • the display page of the terminal device includes a first page, and the first page includes a first area and a second area, and the second area includes an add text control.
  • the terminal device can generate a lyrics area with the paragraph title "Verse" in the second area, and display the accompaniment area of the verse in the first area, where the accompaniment area includes the accompaniment of the verse. In this way, the operational complexity of music creation can be reduced and the efficiency of audio creation can be improved.
  • when the user clicks on the second area, the terminal device can display the first lyrics area in the second area, and can display the first accompaniment area corresponding to the first lyrics area in the first area. In this way, the complexity of music creation can be reduced and the efficiency of music creation can be improved.
  • the above-mentioned audio processing method also includes a deletion operation on the first accompaniment area or the first lyrics area.
  • the terminal device can delete the first accompaniment area or the first lyrics area based on the following feasible implementation method: in response to the deletion operation on the first accompaniment area, cancel the display of the first lyrics area corresponding to the first accompaniment area in the second area, or, in response to the deletion operation on the first lyrics area, cancel the display of the first accompaniment area corresponding to the first lyrics area in the first area.
  • in response to a deletion operation on the first accompaniment area, the terminal device cancels display of the first lyrics area corresponding to the first accompaniment area in the second area.
  • the prelude accompaniment area in the first area is associated with the prelude lyrics area in the second area
  • the verse accompaniment area in the first area is associated with the verse lyrics area in the second area. If the user deletes the prelude accompaniment area in the first area, the terminal device cancels display of the prelude lyrics area in the second area. If the user deletes the verse accompaniment area in the first area, the terminal device cancels display of the verse lyrics area in the second area.
  • the terminal device cancels display of the first accompaniment area corresponding to the first lyrics area in the first area.
  • the prelude lyrics area in the second area is associated with the prelude accompaniment area in the first area
  • the outro lyrics area in the second area is associated with the outro accompaniment area in the first area. If the user deletes the prelude lyrics area in the second area, the terminal device cancels display of the prelude accompaniment area in the first area. If the user deletes the outro lyrics area in the second area, the terminal device cancels display of the outro accompaniment area in the first area.
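The linked deletions above amount to removing both members of a paragraph's pair. A minimal sketch, assuming a hypothetical dictionary structure keyed by paragraph name:

```python
# Sketch of the linked deletion described above: deleting an accompaniment
# area also cancels display of its paired lyrics area, and vice versa.
# The pairing key and the structure names are hypothetical.

areas = {
    "first":  {"prelude": "prelude accompaniment", "outro": "outro accompaniment"},
    "second": {"prelude": "Prelude lyrics", "outro": "Outro lyrics"},
}

def delete_paired(area_name, paragraph):
    # Removing the area in one region cancels display of its counterpart.
    other = "second" if area_name == "first" else "first"
    areas[area_name].pop(paragraph, None)
    areas[other].pop(paragraph, None)

delete_paired("second", "prelude")  # delete the prelude lyrics area
```

After the call, both prelude areas are gone while the outro pair is untouched, matching the FIG6A/FIG6B examples.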
  • FIG6A is a schematic diagram of deleting a first lyrics area and a first accompaniment area provided by an embodiment of the present disclosure.
  • the display page of the terminal device includes a first page, and the first page includes a first area and a second area.
  • the first area includes an accompaniment area for a verse and an accompaniment area for a chorus
  • the accompaniment area for the verse includes the accompaniment of the verse
  • the accompaniment area for the chorus includes the accompaniment of the chorus
  • the second area includes a lyrics area for the verse and a lyrics area for the chorus
  • the lyrics area for the verse includes the text "verse"
  • the lyrics area for the chorus includes the text "chorus".
  • FIG6B is a schematic diagram of deleting a first accompaniment area and a first lyrics area provided by an embodiment of the present disclosure.
  • the display page of the terminal device includes a first page, and the first page includes a first area and a second area.
  • the first area includes an accompaniment area for a verse and an accompaniment area for a chorus
  • the accompaniment area for the verse includes the accompaniment of the verse
  • the accompaniment area for the chorus includes the accompaniment of the chorus
  • the second area includes a lyrics area for the verse and a lyrics area for the chorus
  • the lyrics area for the verse includes the text "verse"
  • the lyrics area for the chorus includes the text "chorus".
  • the disclosed embodiment provides an audio processing method, wherein a terminal device can display a first page including a first area and a second area, and in response to a trigger operation on the first area, display a first accompaniment area in the first area, and display a first lyrics area corresponding to the first accompaniment area in the second area, or, in response to a trigger operation on the second area, display the first lyrics area in the second area, and display the first accompaniment area corresponding to the first lyrics area in the first area.
  • the terminal device can display content associated with the editing operation in another area, thereby reducing the complexity of music creation and improving the efficiency of music creation.
  • the following, in combination with FIG7, describes in detail the method of displaying a first accompaniment area in the first area and a first lyrics area corresponding to the first accompaniment area in the second area in response to a trigger operation on the first area in the above-mentioned audio processing method.
  • FIG7 is a schematic diagram of a method for displaying a first accompaniment area and a first lyrics area provided by an embodiment of the present disclosure.
  • the first area includes a first audio track. Please refer to FIG7.
  • the method flow includes:
  • S701 In response to a touch operation on a first audio track, display an accompaniment style window in a first area.
  • the first area may include a first audio track.
  • the first area may include a first audio track associated with the arrangement beat.
  • the accompaniment style window includes multiple accompaniment style controls.
  • the accompaniment style window includes an accompaniment style control A and an accompaniment style control B, and each accompaniment style control can be associated with an accompaniment style.
  • the accompaniment style window may include a "pop" control, an "electronic music" control, and a "rock" control, wherein the accompaniment style corresponding to the "pop" control is a pop style, the accompaniment style corresponding to the "electronic music" control is an electronic music style, and the accompaniment style corresponding to the "rock" control is a rock style.
  • an accompaniment style window including multiple accompaniment style controls may pop up in the first area of the first page.
  • the accompaniment style window may be in the first area or in another area of the first page, which is not limited in the embodiments of the present disclosure.
  • FIG8 is a schematic diagram of a process of displaying an accompaniment style window provided by an embodiment of the present disclosure.
  • the display page of the terminal device includes a first page, the first page includes a first area and a second area, and the first area includes a first track.
  • an accompaniment style window pops up on the right side of the first area, wherein the accompaniment style window includes rock controls, folk controls, classical controls, and pop controls.
  • S702 In response to a touch operation on the accompaniment style control, determine a target accompaniment style.
  • the target accompaniment style is the style of the accompaniment associated with the first accompaniment area.
  • the accompaniment style window includes a control for accompaniment style A and a control for accompaniment style B.
  • the terminal device determines that the target accompaniment style is accompaniment style A, and the style of the accompaniment associated with the first accompaniment area is accompaniment style A.
  • the terminal device determines that the target accompaniment style is accompaniment style B, and the style of the accompaniment associated with the first accompaniment area is accompaniment style B.
  • the terminal device can intelligently generate an accompaniment associated with the first accompaniment area based on the target accompaniment style. For example, if the user clicks the rock style control in the accompaniment style window, the accompaniment generated by the terminal device for the first accompaniment area is in the rock style; if the user clicks the electronic music style control, the accompaniment generated is in the electronic music style.
  • FIG9 is a schematic diagram of a process for determining a target accompaniment style provided by an embodiment of the present disclosure.
  • the display page of the terminal device includes a first page, and the first page includes a first area and a second area.
  • the first area includes a first audio track, and an accompaniment style window pops up on the right side of the first area.
  • the accompaniment style window includes a rock control, a folk control, a classical control, and a pop control.
  • the terminal device can determine that the target accompaniment style is a rock style.
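Steps S701 and S702 can be sketched as follows; the control names follow FIG8 and FIG9, while `select_style` itself is a hypothetical helper, not an API named in the disclosure:

```python
# Sketch of S701-S702: the accompaniment style window offers several
# style controls, and touching one determines the target accompaniment
# style. Control names follow the figures; the function is hypothetical.

STYLE_WINDOW = ("rock", "folk", "classical", "pop")

def select_style(touched_control):
    if touched_control not in STYLE_WINDOW:
        raise ValueError("unknown accompaniment style control")
    return touched_control  # becomes the target accompaniment style

target_style = select_style("rock")
```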
  • S703 In response to a touch operation on the first music track, display a first accompaniment area on the first music track.
  • the first accompaniment area includes an accompaniment of a target accompaniment style.
  • the terminal device may display a note diagram of the accompaniment associated with the first accompaniment area on the first track, wherein the accompaniment style indicated by the note diagram is the target accompaniment style.
  • the terminal device may display a first accompaniment area on the first audio track based on the following feasible implementation: in response to a touch operation on the first audio track, an accompaniment adding window is displayed.
  • the accompaniment adding window includes an accompaniment paragraph control, and the accompaniment paragraph is the position of a section of accompaniment in the entire accompaniment.
  • the accompaniment paragraph may include sections such as the prelude, the verse, the chorus, and the outro
  • the accompaniment adding window may include a prelude control, a verse control, a chorus control, and an outro control.
  • the terminal device may display the accompaniment adding window.
  • the first accompaniment area is displayed on the first track.
  • the accompaniment paragraph associated with the first accompaniment area is the same as the accompaniment paragraph corresponding to the accompaniment paragraph control.
  • for example, the accompaniment adding window includes a prelude control and a verse control.
  • in response to a touch operation on the prelude control, the first accompaniment area generated by the terminal device is the accompaniment area of the prelude, and the accompaniment of the prelude is displayed in that area.
  • in response to a touch operation on the verse control, the first accompaniment area generated by the terminal device is the accompaniment area of the verse, and the accompaniment of the verse is displayed in that area.
  • if the user clicks the verse control, the terminal device can display the accompaniment area of the verse on the first track of the first area, and the accompaniment area of the verse includes the accompaniment of the verse. If the user clicks the chorus control, the terminal device can display the accompaniment area of the chorus on the first track of the first area, and the accompaniment area of the chorus includes the accompaniment of the chorus.
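The paragraph-control branch of S703 can be sketched like this; `add_accompaniment_area` and the track structure are hypothetical names introduced only for illustration:

```python
# Sketch of S703 as described: after the target style is set, touching an
# accompaniment paragraph control (prelude / verse / chorus / outro)
# displays an accompaniment area for that paragraph on the first track.

PARAGRAPH_CONTROLS = ("prelude", "verse", "chorus", "outro")

def add_accompaniment_area(track, paragraph_control, target_style):
    if paragraph_control not in PARAGRAPH_CONTROLS:
        raise ValueError("unknown accompaniment paragraph control")
    area = {"paragraph": paragraph_control, "style": target_style}
    track.append(area)  # the area appears on the first audio track
    return area

first_track = []
add_accompaniment_area(first_track, "verse", "rock")
```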
  • the first accompaniment area further includes an accompaniment display area
  • the accompaniment display area includes an amplitude waveform corresponding to the accompaniment associated with the first accompaniment area.
  • the first accompaniment area displays the accompaniment associated with the first accompaniment area through the accompaniment display area.
  • the accompaniment display area may include a note graph, a spectrum graph, etc. corresponding to the accompaniment associated with the first accompaniment area.
  • the size of the accompaniment display area and the first accompaniment area may be the same or different, and the embodiments of the present disclosure do not limit this.
  • in response to a size adjustment operation on the accompaniment display area, the size of the accompaniment display area is adjusted, and the amplitude waveform is adjusted accordingly.
  • the terminal device can adjust the size of the accompaniment display area in response to a sliding operation on the edge of the accompaniment display area. It should be noted that when the size of the accompaniment display area is adjusted, the amplitude waveform in the accompaniment display area also changes.
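One way the waveform could track the resized display area is to re-bucket the amplitude samples to the new display width. This rendering strategy is an assumption; the disclosure only states that the waveform changes with the area's size.

```python
# Sketch of the resize behavior: when the accompaniment display area is
# resized, the amplitude waveform is re-rendered to fit the new width.
# Peak-per-bucket rendering is an assumed strategy, not stated in the text.

def render_waveform(samples, width):
    """Reduce amplitude samples to `width` display columns (peak per bucket)."""
    width = min(width, len(samples))
    bucket = max(1, len(samples) // width)
    return [max(samples[i:i + bucket]) for i in range(0, bucket * width, bucket)]

samples = [0.1, 0.4, 0.2, 0.9, 0.3, 0.5, 0.7, 0.6]
wide = render_waveform(samples, 4)    # display area at width 4
narrow = render_waveform(samples, 2)  # after shrinking the display area
```

Shrinking the area halves the number of columns, so the rendered waveform changes shape, consistent with the behavior noted above.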
  • FIG10 is a schematic diagram of a process of displaying a first accompaniment area provided by an embodiment of the present disclosure.
  • as shown in FIG10, the display page of the terminal device includes a first page, the first page includes a first area and a second area, and the first area includes a first audio track.
  • an accompaniment style window pops up on the right side of the first area, and the accompaniment style window includes rock controls, folk controls, classical controls, and pop controls.
  • the terminal device determines that the target style control is rock style.
  • the first area may pop up an accompaniment adding window, wherein the accompaniment adding window includes a verse control and a prelude control.
  • the terminal device may display the verse control in the first area.
  • the accompaniment area of the verse includes an audio display area corresponding to the accompaniment of the verse, and the audio display area includes the amplitude waveform of the rock-style verse accompaniment.
  • the first audio track in Figure 10 only shows the accompaniment area of the verse. If the first audio track also includes an audio display area for the chorus accompaniment and an audio display area for the prelude accompaniment, then when the size of any audio display area is adjusted, the amplitude waveform in each audio display area will change.
  • S704 Display the first lyrics area corresponding to the first accompaniment area in the second area.
  • the terminal device may display the first lyrics area corresponding to the first accompaniment area in the second area. For example, if the terminal device displays the accompaniment area of the prelude in the first area, the terminal device may display the lyrics area of the prelude in the second area; if the terminal device displays the accompaniment area of the verse in the first area, the terminal device may display the lyrics area of the verse in the second area.
  • the disclosed embodiment provides a method for displaying a first accompaniment area and a first lyrics area: in response to a touch operation on a first audio track, an accompaniment style window is displayed in the first area; in response to a touch operation on an accompaniment style control in the accompaniment style window, a target accompaniment style is determined; in response to a touch operation on the first audio track, the first accompaniment area is displayed on the first audio track, and the first lyrics area corresponding to the first accompaniment area is displayed in the second area.
  • the terminal device can display the first accompaniment area and generate an accompaniment associated with the first accompaniment area, and after the user adds the first accompaniment area in the first area, the terminal device can display the first lyrics area corresponding to the first accompaniment area in the second area, thereby reducing the complexity of music creation and improving the efficiency of music creation.
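The linked-display behavior summarized above can be sketched as two mirrored operations: adding an accompaniment section in the first (audio) area also surfaces the matching lyrics area in the second (text) area, and vice versa. Class and field names below are assumptions made for illustration only.

```python
class EditorPage:
    """Hypothetical model of the first page with its two linked areas."""

    def __init__(self):
        self.first_area = {}   # section name -> accompaniment area state
        self.second_area = {}  # section name -> lyrics area state

    def add_accompaniment(self, section: str, style: str) -> None:
        self.first_area[section] = {"style": style}
        # Displaying the first accompaniment area triggers display of the
        # corresponding first lyrics area in the second area.
        self.second_area.setdefault(section, {"title": section, "lines": []})

    def add_lyrics_section(self, section: str) -> None:
        self.second_area[section] = {"title": section, "lines": []}
        # Conversely, adding a lyrics paragraph displays the matching
        # accompaniment area on the first audio track.
        self.first_area.setdefault(section, {"style": None})
```

Whichever area the user edits first, the counterpart area is created automatically, which is the mechanism the disclosure credits with reducing the complexity of music creation.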
  • FIG11 is a schematic diagram of a method for displaying a first lyrics area and a first accompaniment area provided by an embodiment of the present disclosure.
  • the first lyrics area includes a lyrics paragraph title and lyrics. Please refer to FIG11.
  • the method flow includes:
  • the second area may include a first control, and when the user clicks the first control, the second area may display a lyrics paragraph window.
  • the user may input voice information for generating a lyrics paragraph (e.g., voice information for generating a prelude title) into the terminal device, and the terminal device generates the corresponding lyrics paragraph title in the second area based on the voice information.
  • the lyrics paragraph window includes lyrics paragraph controls.
  • the lyrics paragraph window includes lyrics paragraph control A and lyrics paragraph control B, and each lyrics paragraph control can be associated with a title of a lyrics paragraph.
  • the lyrics paragraph window can include a "prelude" control, a "verse" control, and a "chorus" control, wherein the lyrics paragraph title associated with the "prelude" control is the prelude, the lyrics paragraph title associated with the "verse" control is the verse, and the lyrics paragraph title associated with the "chorus" control is the chorus.
  • FIG12 is a schematic diagram of a process of displaying a text title window provided by an embodiment of the present disclosure.
  • the display page of the terminal device includes a first page, and the first page includes a first area and a second area.
  • the second area includes a first control.
  • the lyrics paragraph window includes controls for the prelude paragraph and controls for the verse paragraph.
  • the second area may include multiple first controls, which is not limited in the present embodiment.
  • the terminal device may also display multiple lyrics paragraph titles (such as prelude, verse, chorus, and outro) in the second area according to music theory. This facilitates music creation for users and improves the efficiency of music creation.
  • S1102 In response to a touch operation on a lyrics paragraph control, display a first lyrics area in a second area.
  • the first lyrics area includes a lyrics section title associated with the lyrics section control. For example, if the user clicks the control of the verse section, the first lyrics area includes the title of the verse section, and if the user clicks the control of the prelude section, the first lyrics area includes the title of the prelude.
  • Figure 13 is a schematic diagram of a process for displaying a first lyrics area provided by an embodiment of the present disclosure.
  • the display page of the terminal device includes a first page, and the first page includes a first area and a second area.
  • the second area includes a first control.
  • the lyrics paragraph window includes controls for the prelude paragraph and controls for the verse paragraph.
  • If the terminal device determines that the lyrics paragraph is a prelude paragraph, the terminal device cancels the display of the lyrics paragraph window, displays the lyrics paragraph title "Prelude" at the first control, and displays the accompaniment area of the prelude in the first area, where the accompaniment area of the prelude includes the accompaniment of the prelude.
  • S1103 Display a first accompaniment area corresponding to the first lyrics area in the first area.
  • the terminal device may display the first accompaniment area corresponding to the first lyrics area in the first area. For example, if the first lyrics area displayed by the terminal device in the second area is the lyrics area of the prelude, the terminal device may display the accompaniment area of the prelude in the first area; if the first lyrics area displayed in the second area is the lyrics area of the verse, the terminal device may display the accompaniment area of the verse in the first area.
  • the terminal device when the terminal device displays the first accompaniment area corresponding to the first lyrics area, if the terminal device has determined the target accompaniment style selected by the user, the terminal device may include the accompaniment of the target accompaniment style in the first accompaniment area displayed in the first area; if the terminal device has not determined the target accompaniment style, the terminal device may display an accompaniment style window, and when the user determines the target accompaniment style, the accompaniment of the target accompaniment style is displayed in the first accompaniment area.
  • the method for the terminal device to determine the target accompaniment style may refer to the embodiment shown in Figure 7, and the embodiments of the present disclosure will not be repeated here.
  • S1104 In response to an editing operation on a target area in the first lyrics area, display a lyrics window, where the lyrics window includes at least one section of lyrics.
  • the first lyrics area also includes a target area associated with the lyrics paragraph title.
  • the target area may be the lower side of the lyrics paragraph title, or the target area may be the right side of the lyrics paragraph title, which is not limited in the embodiment of the present disclosure.
  • the editing operation may be a touch operation, a voice operation, or a text input operation, which is not limited in the embodiments of the present disclosure.
  • the editing operation may be a user inputting the text "raining" in the target area, or the editing operation may be a user's touch operation and voice operation on the target area (e.g., the touch operation is a long press operation, and the voice operation is inputting the voice "raining").
  • the lyrics window includes at least one paragraph of lyrics.
  • at least one paragraph of lyrics is associated with the editing operation. For example, if the editing operation is to input the text "raining", the lyrics displayed in the lyrics window may be "raining" itself, or the terminal device can generate lyrics associated with raining and display them in the lyrics window. In this way, the terminal device can intelligently generate lyrics to reduce the complexity of music creation.
  • the target lyrics are displayed in the target area.
  • the lyrics window includes lyrics A and lyrics B. If the user clicks on lyrics A, the terminal device displays lyrics A in the target area, and if the user clicks on lyrics B, the terminal device displays lyrics B in the target area.
  • the lyrics displayed in the target area can be modified. For example, if the lyrics displayed in the target area are "Hello", the user can modify the lyrics “Hello” to "Goodbye” through the modification operation. In this way, when creating music, the user can flexibly modify the intelligent lyrics recommended by the terminal device, thereby improving the flexibility of music creation.
  • the terminal device can display at least one lyric associated with the editing operation in the target area, and the user can also directly input the relevant lyrics into the target area through the terminal device, which is not limited in the embodiment of the present disclosure.
  • the terminal device can generate lyrics associated with the editing operation; alternatively, a music creator with strong creative ability can directly input self-written lyrics into the target area. In this way, users can create music intelligently and personally, reducing the complexity of music creation and improving the efficiency of music creation.
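The keyword-driven suggestion flow above can be illustrated with a toy lookup: the text entered in the target area (e.g. "raining") yields candidate lyric lines for the lyrics window, and tapping one places it into the target area. The candidate table is a stand-in for whatever generation model the terminal device actually uses; all names are assumptions.

```python
from typing import Optional

# Illustrative candidate table standing in for intelligent lyric generation.
CANDIDATES = {
    "raining": ["Rainy days are beautiful", "Walking in the rainy day"],
}


def suggest_lyrics(keyword: str) -> list[str]:
    """Return candidate lyric lines for the lyrics window.

    An empty list models the case where the user prefers to input
    self-written lyrics directly into the target area.
    """
    return CANDIDATES.get(keyword.lower(), [])


def choose_lyric(keyword: str, index: int) -> Optional[str]:
    """Simulate tapping a target lyric: the chosen line goes to the target area."""
    options = suggest_lyrics(keyword)
    return options[index] if 0 <= index < len(options) else None
```

This mirrors the two paths the disclosure allows: pick a recommended line, or bypass suggestions and type lyrics directly.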
  • FIG14 is a schematic diagram of a process for displaying lyrics provided by an embodiment of the present disclosure.
  • the display page of the terminal device includes a first page, and the first page includes a first area and a second area.
  • the second area includes a first control.
  • When the user clicks the first control, the second area displays a lyrics paragraph window, wherein the lyrics paragraph window includes controls for the prelude paragraph and controls for the verse paragraph.
  • If the terminal device determines that the lyrics paragraph title is the prelude, the terminal device cancels the display of the lyrics paragraph window, displays the lyrics paragraph title "Prelude" at the first control, and displays the accompaniment area of the prelude in the first area, where the accompaniment area of the prelude includes the accompaniment of the prelude.
  • the terminal device can display the lyrics window in the second area.
  • the lyrics window includes the lyrics "Rainy days are beautiful" and the lyrics "Walking in the rainy day" (both lyrics are associated with the input text "Raining").
  • the terminal device cancels the display of the lyrics window and displays the lyrics "Rainy days are beautiful” in the target area. This paragraph of lyrics is the lyrics of the prelude.
  • the terminal device can recommend lyrics suitable for the accompaniment style to the user based on the key content, thereby reducing the complexity of music creation and improving the efficiency of music creation.
  • the disclosed embodiment provides a method for displaying a first lyrics area and a first accompaniment area: in response to a touch operation on the second area, a lyrics paragraph window is displayed in the second area; in response to a touch operation on a lyrics paragraph control in the lyrics paragraph window, the first lyrics area is displayed in the second area, and the first accompaniment area corresponding to the first lyrics area is displayed in the first area; in response to an editing operation on a target area in the first lyrics area, the lyrics window is displayed; and in response to a touch operation on target lyrics in at least one section of lyrics in the lyrics window, the target lyrics are displayed in the target area.
  • the terminal device can display the first accompaniment area corresponding to the first lyrics area in the second area, reducing the complexity of music creation, and in response to an edit operation on the first lyrics area, the terminal device can automatically generate lyrics, improving the efficiency of music creation.
  • the above audio processing method also includes a method for displaying the first voice input by the user.
  • the method for displaying the first voice is described in detail in conjunction with Figure 15.
  • FIG15 is a schematic diagram of a method for displaying a first voice provided by an embodiment of the present disclosure.
  • the first area also includes a second audio track. Please refer to FIG15.
  • the method flow includes:
  • the sound effect window includes sound effect controls.
  • the sound effect window may include reverberation controls and electronic music controls, etc.
  • the second audio track is used to display the voice input by the user.
  • the second audio track can display the spectrum graph or note graph corresponding to the voice segment.
  • in response to a touch operation on the second audio track, the terminal device can display the sound effect window in the first page.
  • the terminal device can display the sound effect window in the first area, in the second area, or in other areas of the first page, and the embodiments of the present disclosure do not limit this.
  • FIG16 is a schematic diagram of a process of displaying a sound effect window provided by an embodiment of the present disclosure.
  • the display page of the terminal device includes a first page, and the first page includes a first area and a second area.
  • the second area includes the lyrics paragraph title "Prelude” and the lyrics "It's cool on a rainy day”.
  • the first area includes a first audio track and a second audio track, and the first audio track includes an accompaniment area of the prelude, and the accompaniment area of the prelude includes the accompaniment of the prelude.
  • a sound effect window may pop up on the right side of the first area.
  • the sound effect window includes electronic music controls, equalization controls, and mixing controls.
  • the electronic music control can modify the timbre of the voice input by the user to the timbre of electronic music
  • the equalizer control can modify the timbre of the voice input by the user to a balanced timbre
  • the mixing control can modify the timbre of the voice input by the user to a mixed timbre.
  • the terminal device includes multiple music creation functions, and users can create music in a personalized and diversified manner, thereby improving user experience and improving the efficiency of music creation.
  • S1502 Determine a target sound effect in response to a touch operation on a sound effect control.
  • the sound effect window includes at least one sound effect control, and when the user clicks the sound effect control, the terminal device can determine the target sound effect.
  • the sound effect window includes a mixing control and an electronic sound control, and if the user clicks the mixing control, the target sound effect is mixing, and if the user clicks the electronic sound control, the target sound effect is electronic sound.
  • the terminal device may display a track adding control in the first area, and in response to a touch operation on the track adding control, display a track associated with the second track in the first area.
  • the terminal device may display a track adding control in the lower area of the second track. When the user clicks the track adding control, the terminal device may display another track in the lower area of the second track, whose function is the same as that of the second track. When that track is used to display the voice input by the user, the sound effect may be reselected, or the same sound effect as the second track may be used, which is not limited in the embodiments of the present disclosure.
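The track-adding behavior can be sketched as appending a new track directly below its base track, either inheriting the base track's sound effect or leaving the effect to be reselected, since the disclosure allows both. Function and field names are illustrative assumptions.

```python
def add_associated_track(tracks: list[dict], base_index: int,
                         inherit_effect: bool = True) -> dict:
    """Simulate the track adding control below the second audio track."""
    base = tracks[base_index]
    new_track = {
        # Hypothetical naming scheme; the disclosure does not specify one.
        "name": f'{base["name"]}-copy{len(tracks)}',
        "effect": base["effect"] if inherit_effect else None,
    }
    # The associated track is displayed in the area below the base track.
    tracks.insert(base_index + 1, new_track)
    return new_track
```

With `inherit_effect=False`, the new track starts with no target sound effect, modeling the case where the user reselects one from the sound effect window.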
  • FIG17 is a schematic diagram of adding an audio track associated with a second audio track provided by an embodiment of the present disclosure.
  • the display page of the terminal device includes a first page, and the first page includes a first area and a second area.
  • the second area includes the lyrics paragraph title "Prelude” and the lyrics "It's cool on a rainy day.”
  • the first area includes the first audio track, the second audio track and the sound effect window, the first audio track includes the accompaniment area of the prelude, the accompaniment area of the prelude includes the accompaniment of the prelude, and the sound effect window includes electronic music controls, equalization controls and mixing controls.
  • the terminal device cancels the display of the sound effect window and determines that the sound effect of the second track is the sound effect of electronic music.
  • the terminal device displays the track adding control below the second track.
  • the terminal device can display track A, where the function of track A is the same as that of the second track. In this way, when the user is creating music, multiple tracks with different sound effects can be created, thereby improving the flexibility of music creation.
  • S1503 In response to a voice operation input by the user, display a first voice associated with the voice operation on a second audio track.
  • the voice trigger operation can be a voice input by the user.
  • the user can sing according to the accompaniment in the first accompaniment area and the lyrics in the first lyrics area, and the terminal device can obtain the content of the user's singing and display the note map corresponding to the user's voice in the second audio track.
  • the sound effect associated with the timbre in the first voice is the target sound effect.
  • If the target sound effect of the second audio track is electronic music, the timbre in the music sung by the user is the timbre of electronic music; if the target sound effect of the second audio track is mixed sound, the timbre in the music sung by the user is the timbre of mixed sound.
  • the terminal device can display other voices different from the sound effect of the first voice in the audio track associated with the second audio track, which can improve the flexibility of audio editing.
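The relationship between the target sound effect and the recorded voice's timbre can be sketched as follows: the effect chosen in the sound effect window is attached to the second audio track, and any voice subsequently recorded on that track carries the corresponding timbre. The effect names mirror the controls in the disclosure; the class itself and its fields are assumptions, and no actual signal processing is performed.

```python
class AudioTrack:
    """Hypothetical model of the second audio track and its target sound effect."""

    def __init__(self, name: str):
        self.name = name
        self.effect = None  # target sound effect, e.g. "electronic" or "mixed"
        self.clips = []

    def set_effect(self, effect: str) -> None:
        # Selecting a sound effect control determines the target sound effect.
        self.effect = effect

    def record_voice(self, voice: str) -> dict:
        # The sound effect associated with the timbre of the first voice
        # is the track's target sound effect ("dry" models no effect).
        clip = {"voice": voice, "timbre": self.effect or "dry"}
        self.clips.append(clip)
        return clip
```

A second, associated track can hold its own `effect`, which is how voices with different sound effects coexist, as the passage above notes.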
  • the disclosed embodiment provides a method for displaying a first voice: in response to a touch operation on a second audio track, a sound effect window is displayed; in response to a touch operation on a sound effect control in the sound effect window, a target sound effect is determined; and in response to a voice operation input by the user, the first voice associated with the voice operation is displayed on the second audio track.
  • After the terminal device determines the accompaniment and lyrics, the terminal device can display the content sung by the user in the first area, thereby improving the effect of music creation.
  • FIG18 is a schematic diagram of the structure of an audio processing device provided by an embodiment of the present disclosure.
  • the audio processing device 180 includes a display module 181 and a response module 182, wherein:
  • the display module 181 is used to display a first page, the first page includes a first area and a second area, the first area is associated with audio editing, and the second area is associated with text editing;
  • the response module 182 is used for displaying a first accompaniment area in the first area and a first lyrics area in the second area in response to an editing operation on the first area or the second area.
  • the response module 182 is specifically used to:
  • the first accompaniment area is displayed in the first area, and a first lyrics area corresponding to the first accompaniment area is displayed in the second area;
  • the first lyrics area is displayed in the second area, and a first accompaniment area corresponding to the first lyrics area is displayed in the first area.
  • the response module 182 is specifically used to:
  • the first accompaniment area is displayed on the first audio track, and the first accompaniment area includes an accompaniment of a target accompaniment style.
  • the response module 182 is specifically used to:
  • an accompaniment adding window is displayed, wherein the accompaniment adding window includes an accompaniment section control, wherein the accompaniment section is the position of an accompaniment in the entire accompaniment;
  • the first accompaniment area is displayed on the first audio track, and the accompaniment passage associated with the first accompaniment area is the same as the accompaniment passage corresponding to the accompaniment passage control.
  • the first accompaniment area further includes an accompaniment display area
  • the accompaniment display area includes an amplitude waveform corresponding to the accompaniment associated with the first accompaniment area
  • the response module 182 is specifically used to:
  • a size of the accompaniment display area is adjusted, and the amplitude waveform is adjusted.
  • the response module 182 is specifically used to:
  • the first lyrics area is displayed in the second area, wherein the first lyrics area includes a lyrics paragraph title associated with the lyrics paragraph control.
  • the response module 182 is specifically used to:
  • in response to an editing operation on the target area in the first lyrics area, displaying a lyrics window, the lyrics window including at least one section of lyrics, the at least one section of lyrics being associated with the editing operation;
  • in response to a touch operation on a target lyric in the at least one section of lyrics, the target lyric is displayed in the target area.
  • the response module 182 is specifically used to:
  • the first accompaniment area corresponding to the first lyrics area is displayed in the first area.
  • the response module 182 is specifically used to:
  • a first voice associated with the voice operation is displayed in the second audio track, and a sound effect associated with the timbre in the first voice is the target sound effect.
  • the audio processing device provided in the embodiment of the present disclosure may be used to execute the technical solution of the above method embodiment, and its implementation principle and technical effect are similar, which will not be described in detail in this embodiment.
  • FIG19 is a schematic diagram of the structure of another audio processing device provided by an embodiment of the present disclosure.
  • the audio processing device 180 further includes an adding module 183, and the adding module 183 is used to:
  • an audio track associated with the second audio track is displayed in the first area.
  • the audio processing device provided in the embodiment of the present disclosure may be used to execute the technical solution of the above method embodiment, and its implementation principle and technical effect are similar, which will not be described in detail in this embodiment.
  • the embodiments of the present disclosure further provide a computer-readable storage medium, in which computer-executable instructions are stored.
  • a processor executes the computer-executable instructions
  • the processor executes the methods described in the above-mentioned method embodiments.
  • the embodiments of the present disclosure also provide a computer program, which, when executed by a processor, implements the methods described in the above-mentioned various method embodiments.
  • the embodiments of the present disclosure further provide a computer program product, including a computer program, which implements the methods described in the above-mentioned various method embodiments when executed by a processor.
  • the present disclosure provides an audio processing method, apparatus and terminal device, wherein the terminal device can display a first page, the first page includes a first area and a second area, wherein the first area is associated with audio editing, and the second area is associated with text editing, and in response to an editing operation on the first area or the second area, a first accompaniment area is displayed in the first area, and a first lyrics area is displayed in the second area.
  • the terminal device when a user performs an editing operation on the first area, the terminal device can display the first accompaniment area in the first area, and display the first lyrics area associated with the first accompaniment area in the second area; when a user performs an editing operation on the second area, the terminal device can display the first lyrics area in the second area, and display the first accompaniment area corresponding to the first lyrics area in the first area. Therefore, when a user performs an editing operation in any area, the terminal device can display the associated content in another area, thereby reducing the complexity of operations during music creation, thereby reducing the complexity of music creation, and improving the efficiency of music creation.
  • FIG20 is a schematic diagram of the structure of a terminal device provided by an embodiment of the present disclosure.
  • the terminal device 2000 may be a terminal device or a server.
  • the terminal device may include but is not limited to mobile terminals such as mobile phones, laptop computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), vehicle-mounted terminals (such as vehicle-mounted navigation terminals), etc., and fixed terminals such as digital TVs, desktop computers, etc.
  • the terminal device shown in FIG20 is only an example and should not bring any limitations to the functions and scope of use of the embodiments of the present disclosure.
  • the terminal device 2000 may include a processing device (e.g., a central processing unit, a graphics processing unit, etc.) 2001, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 2002 or a program loaded from a storage device 2008 to a random access memory (RAM) 2003.
  • Various programs and data required for the operation of the terminal device 2000 are also stored in the RAM 2003.
  • the processing device 2001, the ROM 2002, and the RAM 2003 are connected to each other via a bus 2004.
  • An input/output (I/O) interface 2005 is also connected to the bus 2004.
  • the following devices may be connected to the I/O interface 2005: input devices 2006 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; output devices 2007 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 2008 including, for example, a magnetic tape, a hard disk, etc.; and communication devices 2009.
  • the communication device 2009 may allow the terminal device 2000 to communicate with other devices wirelessly or by wire to exchange data.
  • Although FIG. 20 shows a terminal device 2000 with various devices, it should be understood that not all of the devices shown are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
  • an embodiment of the present disclosure includes a computer program product, which includes a computer program borne on a computer readable medium.
  • a computer program includes a program code for executing the method shown in the flowchart.
  • the computer program can be downloaded and installed from the network through the communication device 2009, or installed from the storage device 2008, or installed from the ROM 2002.
  • When the computer program is executed by the processing device 2001, the above functions defined in the method of the embodiments of the present disclosure are executed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the above two.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device or device, or any combination of the above.
  • Computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, device or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, which carries a computer-readable program code. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
  • Computer-readable signal media may also be any computer-readable medium other than computer-readable storage media, which may send, propagate, or transmit programs for use by or in conjunction with an instruction execution system, apparatus, or device.
  • the program code contained on the computer-readable medium may be transmitted using any appropriate medium, including but not limited to: wires, optical cables, RF (Radio Frequency), etc., or any suitable combination of the above.
  • the computer-readable medium may be included in the terminal device, or may exist independently without being installed in the terminal device.
  • the computer-readable medium carries one or more programs.
  • the terminal device executes the method shown in the above embodiment.
  • Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, C++, and conventional procedural programming languages such as "C" or similar programming languages.
  • the program code may be executed entirely on the user's computer, partially on the user's computer, as a separate software package, partially on the user's computer and partially on a remote computer, or entirely on a remote computer or server.
  • the remote computer may be connected to the user's computer via any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (e.g., via the Internet using an Internet service provider).
  • Each block in the flowchart or block diagram can represent a module, a program segment, or a part of the code, and that module, program segment, or part of the code contains one or more executable instructions for realizing the specified logical function.
  • The functions marked in the blocks may also occur in an order different from that marked in the accompanying drawings. For example, two blocks shown in succession may actually be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved.
  • Each block in the block diagram and/or flowchart, and combinations of blocks in the block diagram and/or flowchart, can be implemented with a dedicated hardware-based system that performs the specified function or operation, or with a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure may be implemented by software or hardware.
  • The name of a unit does not, in some cases, limit the unit itself; for example, the first acquisition unit may also be described as a "unit for acquiring at least two Internet Protocol addresses".
  • exemplary types of hardware logic components include: Field Programmable Gate Array (FPGA), Application Specific Integrated Circuit (ASIC), Application Specific Standard Product (ASSP), System On Chip (SOC), Complex Programmable Logic Device (CPLD), etc.
  • A machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device.
  • A machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • a more specific example of a machine-readable storage medium may include an electrical connection based on one or more lines, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • a prompt message is sent to the user to clearly prompt the user that the operation requested to be performed will require obtaining and using the user's personal information.
  • the user can autonomously choose whether to provide personal information to software or hardware such as a terminal device, application, server, or storage medium that performs the operation of the technical solution of the present disclosure according to the prompt message.
  • In response to receiving an active request from the user, the prompt message may be sent to the user in the form of a pop-up window, in which the prompt message may be presented in text form.
  • The pop-up window may also carry a selection control for the user to choose "agree" or "disagree" to provide personal information to the terminal device.
  • the data involved in this technical solution shall comply with the requirements of the relevant laws and regulations.
  • the data may include information, parameters and messages, such as flow switching indication information.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Electrophonic Musical Instruments (AREA)

Abstract

The present disclosure provides an audio processing method and apparatus, and a terminal device. The method comprises: displaying a first page, the first page comprising a first area and a second area, the first area being associated with audio editing and the second area being associated with text editing; and in response to an editing operation on the first area or the second area, displaying a first accompaniment area in the first area and displaying a first lyrics area in the second area.

Description

Audio processing method and apparatus, and terminal device
Cross-Reference to Related Application
This application claims priority to Chinese Patent Application No. 202211289254.0, filed with the China National Intellectual Property Administration on October 20, 2022 and entitled "Audio processing method and apparatus, and terminal device", which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of the present disclosure relate to the field of audio processing technology, and in particular to an audio processing method and apparatus, and a terminal device.
Background
Music creators can compose music using music applications. For example, a music creator can add audio effects to an audio clip through a music application.
At present, a music creator can add a composed arrangement to a music application and, through the application, add associated sound effects, lyrics, and other elements to the arrangement. However, composing arrangements and lyrics is difficult, existing audio editing functions are limited, and the demands placed on the creator are high; music creators cannot create music easily, and the efficiency of music creation is low.
Summary
The present disclosure provides an audio processing method and apparatus, and a terminal device, to solve the technical problem in the prior art that the efficiency of music creation is low.
In a first aspect, the present disclosure provides an audio processing method, comprising:
displaying a first page, the first page comprising a first area and a second area, wherein the first area is associated with audio editing and the second area is associated with text editing;
in response to an editing operation on the first area or the second area, displaying a first accompaniment area in the first area and displaying a first lyrics area in the second area.
In a second aspect, the present disclosure provides an audio processing apparatus, comprising a display module and a response module, wherein:
the display module is configured to display a first page, the first page comprising a first area and a second area, the first area being associated with audio editing and the second area being associated with text editing;
the response module is configured to, in response to an editing operation on the first area or the second area, display a first accompaniment area in the first area and display a first lyrics area in the second area.
In a third aspect, an embodiment of the present disclosure provides a terminal device, comprising a processor and a memory;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory, so that the at least one processor performs the audio processing method of the first aspect and its various possible designs.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the audio processing method of the first aspect and its various possible designs.
In a fifth aspect, an embodiment of the present disclosure provides a computer program product, comprising a computer program which, when executed by a processor, implements the audio processing method of the first aspect and its various possible designs.
In a sixth aspect, an embodiment of the present disclosure provides a computer program which, when executed by a processor, implements the audio processing method of the first aspect and its various possible designs.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present disclosure or the prior art more clearly, the following briefly introduces the accompanying drawings needed in the description of the embodiments or the prior art. Evidently, the drawings described below show some embodiments of the present disclosure, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an application scenario according to an embodiment of the present disclosure;
Fig. 2 is a schematic flowchart of an audio processing method according to an embodiment of the present disclosure;
Fig. 3 is a schematic diagram of a process of displaying a first page according to an embodiment of the present disclosure;
Fig. 4 is a schematic diagram of a process of displaying a first accompaniment area and a first lyrics area according to an embodiment of the present disclosure;
Fig. 5 is a schematic diagram of displaying a first lyrics area and a first accompaniment area according to an embodiment of the present disclosure;
Fig. 6A is a schematic diagram of deleting a first lyrics area and a first accompaniment area according to an embodiment of the present disclosure;
Fig. 6B is a schematic diagram of deleting a first accompaniment area and a first lyrics area according to an embodiment of the present disclosure;
Fig. 7 is a schematic diagram of displaying a first accompaniment area and a first lyrics area according to an embodiment of the present disclosure;
Fig. 8 is a schematic diagram of a process of displaying an accompaniment style window according to an embodiment of the present disclosure;
Fig. 9 is a schematic diagram of a process of determining a target accompaniment style according to an embodiment of the present disclosure;
Fig. 10 is a schematic diagram of a process of displaying a first accompaniment area according to an embodiment of the present disclosure;
Fig. 11 is a schematic diagram of displaying a first lyrics area and a first accompaniment area according to an embodiment of the present disclosure;
Fig. 12 is a schematic diagram of a process of displaying a text title window according to an embodiment of the present disclosure;
Fig. 13 is a schematic diagram of a process of displaying a first lyrics area according to an embodiment of the present disclosure;
Fig. 14 is a schematic diagram of a process of displaying lyrics according to an embodiment of the present disclosure;
Fig. 15 is a schematic diagram of a method for displaying a first voice according to an embodiment of the present disclosure;
Fig. 16 is a schematic diagram of a process of displaying a sound effect window according to an embodiment of the present disclosure;
Fig. 17 is a schematic diagram of adding a track associated with a second track according to an embodiment of the present disclosure;
Fig. 18 is a schematic structural diagram of an audio processing apparatus according to an embodiment of the present disclosure;
Fig. 19 is a schematic structural diagram of another audio processing apparatus according to an embodiment of the present disclosure; and
Fig. 20 is a schematic structural diagram of a terminal device according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments are described in detail here, examples of which are shown in the accompanying drawings. Where the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
For ease of understanding, concepts involved in the embodiments of the present disclosure are explained below.
Terminal device: a device having wireless transceiver functions. A terminal device may be deployed on land, including indoors or outdoors, handheld, wearable, or vehicle-mounted, or on water (e.g., on a ship). The terminal device may be a mobile phone, a tablet computer (Portable Android Device, PAD), a computer with wireless transceiver functions, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a vehicle-mounted terminal device, a wireless terminal in self driving, a wireless terminal device in remote medical, a wireless terminal device in smart grid, a wireless terminal device in transportation safety, a wireless terminal device in smart city, a wireless terminal device in smart home, a wearable terminal device, and so on. The terminal device involved in the embodiments of the present disclosure may also be referred to as a terminal, user equipment (UE), an access terminal device, a vehicle-mounted terminal, an industrial control terminal, a UE unit, a UE station, a mobile station, a mobile terminal, a remote station, a remote terminal device, a mobile device, a UE terminal device, a wireless communication device, a UE agent, a UE apparatus, etc. The terminal device may be fixed or mobile.
Music theory: music theory is the theory underlying music. It includes relatively elementary fundamentals, such as score reading, intervals, chords, rhythm, and meter, as well as more advanced topics, such as harmony, counterpoint, musical form, melody, and orchestration.
Arranging: arranging is the process of orchestrating a piece of music by applying music theory. For example, arranging may be the process of writing the accompaniment and harmony for a musical work according to the main melody (beat) of the music and the style the creator wishes the work to express (cheerful, rock, etc.).
In the related art, a music creator can compose a section of accompaniment and, through a music application, add sound effects, lyrics, and other elements to it to complete the musical work. However, composing accompaniment and lyrics is difficult and requires knowledge of music theory; moreover, existing music editing functions are limited and complex to operate, so music creators cannot create music easily and the efficiency of music creation is low.
To solve the technical problems in the related art, embodiments of the present disclosure provide an audio processing method. A terminal device may display a first area associated with audio editing and a second area associated with text editing; in response to a trigger operation on the first area, display a first accompaniment area in the first area and display, in the second area, a first lyrics area corresponding to the first accompaniment area; or, in response to a trigger operation on the second area, display a first lyrics area in the second area and display, in the first area, a first accompaniment area corresponding to the first lyrics area. In the above method, when a music creator performs a text editing operation on the second area used for text editing, the terminal device can display the accompaniment area associated with that operation in the first area; when the creator performs an audio editing operation on the first area used for audio editing, the terminal device can display the lyrics area associated with that operation in the second area. In this way, whichever area the user edits, the terminal device can generate and display both the accompaniment area and the lyrics area, thereby reducing the complexity of music creation and improving its efficiency.
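The linked two-area behavior described above can be sketched as a toy model. This is a hypothetical illustration only; the class and field names below are assumptions for exposition and are not taken from the disclosure or any real implementation.

```python
# Toy model of the linked editing areas: an edit in EITHER the audio area or
# the text area produces a paired accompaniment region and lyrics region.

class SongEditor:
    def __init__(self):
        self.accompaniment_regions = []  # first area (audio editing)
        self.lyrics_regions = []         # second area (text editing)

    def edit_audio_area(self, section):
        """User edits the first (audio) area: add accompaniment, mirror lyrics."""
        self.accompaniment_regions.append(
            {"section": section, "audio": f"{section}-backing"})
        self.lyrics_regions.append({"section": section, "text": ""})

    def edit_text_area(self, section):
        """User edits the second (text) area: add lyrics, mirror accompaniment."""
        self.lyrics_regions.append({"section": section, "text": ""})
        self.accompaniment_regions.append(
            {"section": section, "audio": f"{section}-backing"})

editor = SongEditor()
editor.edit_text_area("verse")    # editing either area...
editor.edit_audio_area("chorus")  # ...always yields a paired region in the other
print(len(editor.accompaniment_regions), len(editor.lyrics_regions))  # 2 2
```

The point of the sketch is only the invariant: after any edit, the two region lists stay the same length and aligned by section.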
The application scenario of the embodiments of the present disclosure is described below with reference to Fig. 1.
Fig. 1 is a schematic diagram of an application scenario according to an embodiment of the present disclosure. Referring to Fig. 1, a terminal device is included. The display page of the terminal device is a first page, which includes a first area associated with audio editing and a second area associated with text editing. If the terminal device displays the text 'Intro' in the second area, it can display the accompaniment corresponding to the intro in the first area. In this way, when the user edits either area, the terminal device can display the corresponding content in the other area, reducing the complexity of music creation and thereby improving its efficiency.
It should be noted that Fig. 1 merely illustrates an application scenario of the embodiments of the present disclosure by way of example and does not limit that scenario.
The technical solutions of the present disclosure, and how they solve the above technical problems, are described in detail below with specific embodiments. The following specific embodiments may be combined with one another, and the same or similar concepts or processes may not be repeated in some embodiments. The embodiments of the present disclosure are described below with reference to the accompanying drawings.
Fig. 2 is a schematic flowchart of an audio processing method according to an embodiment of the present disclosure. Referring to Fig. 2, the method may include:
S201: Display a first page.
The execution subject of the embodiments of the present disclosure may be a terminal device, or an audio processing apparatus provided in the terminal device. The audio processing apparatus may be implemented by software, or by a combination of software and hardware.
Optionally, the first page includes a first area and a second area. Optionally, the first area is associated with audio editing and the second area is associated with text editing. Optionally, audio may be displayed in the first area. For example, the terminal device may display a spectrogram of the accompaniment in the first area, or display the frequency of the accompaniment in the first area.
Optionally, text may be displayed in the second area. For example, the terminal device may display titles (e.g., intro, verse) in the second area, display lyrics in the second area, or display both titles and lyrics in the second area, which is not limited in the embodiments of the present disclosure.
Optionally, the terminal device may display the first page in the following feasible implementation: in response to a touch operation on a browser program, displaying a browser page; entering, in the address input area of the browser page, a first web address associated with the first page; and, in response to a jump operation on the first web address, displaying the first page. For example, when the user taps a browser application on the terminal device, the terminal device may display the browser page, which includes an address input area; the user may enter the web address associated with the first page in the address input area and tap a page-jump control, whereupon the browser jumps to and displays the first page.
The process of displaying the first page is described below with reference to Fig. 3.
Fig. 3 is a schematic diagram of a process of displaying a first page according to an embodiment of the present disclosure. Referring to Fig. 3, a terminal device is included. The display page of the terminal device includes a browser control. When the user clicks the browser control with a mouse, the terminal device displays the browser page, which includes an address input area. When the user enters the web address associated with the first page and clicks the jump control, the browser page can jump to the first page, which includes a first area and a second area.
It should be noted that in the embodiment shown in Fig. 3, the user may click the display page of the terminal device with a mouse, click it by touch, or trigger it by voice control, which is not limited in the embodiments of the present disclosure.
S202: In response to an editing operation on the first area or the second area, display a first accompaniment area in the first area and display a first lyrics area in the second area.
Optionally, displaying the first accompaniment area in the first area and the first lyrics area in the second area in response to an editing operation on the first area or the second area covers the following two cases:
Case 1: in response to a trigger operation on the first area.
Optionally, in response to a trigger operation on the first area, the first accompaniment area is displayed in the first area, and the first lyrics area corresponding to the first accompaniment area is displayed in the second area. Optionally, the first lyrics area may include a lyric paragraph title and text content. For example, the lyric paragraph title may be the title of a section of the arrangement, and the text content may be the lyrics of the arrangement. For example, the lyric paragraph title may be a title such as 'Intro', 'Verse', 'Chorus', or 'Outro', and the text content may be lyrics entered freely by the user or lyrics intelligently recommended by the terminal device.
Optionally, the trigger operation on the first area may include a touch operation or a voice operation by the user on the first area, which is not limited in the embodiments of the present disclosure. For example, when the user performs a click operation on the first area, the terminal device may display the first accompaniment area in the first area and display the corresponding first lyrics area in the second area.
Optionally, the first accompaniment area in the first area may include an accompaniment. For example, when the user performs a click operation in the first area, the first area may display the first accompaniment area, which may include a note chart of the accompaniment (showing its notes), a spectrogram (showing its amplitude), and the like, which is not limited in the embodiments of the present disclosure.
Optionally, the terminal device may intelligently recommend the accompaniment associated with the first accompaniment area, or load an external accompaniment, which is not limited in the embodiments of the present disclosure. It should be noted that each first accompaniment area has a corresponding first lyrics area. For example, if the first accompaniment area is the intro area of the arrangement, the lyric paragraph title of the corresponding first lyrics area is the text 'Intro', and the text content of that lyrics area is the lyrics of the intro.
The process of displaying the first accompaniment area and the first lyrics area in this case is described below with reference to Fig. 4.
Fig. 4 is a schematic diagram of a process of displaying a first accompaniment area and a first lyrics area according to an embodiment of the present disclosure. Referring to Fig. 4, a terminal device is included. The display page of the terminal device includes a first page, which includes a second area and a first area; the first area includes an add-accompaniment control. When the user clicks the add-accompaniment control with a mouse, the terminal device can generate the verse accompaniment area in the first area, which includes the accompaniment of the verse, and display the lyric paragraph title 'Verse' in the second area. This reduces the operational complexity of music creation and improves the efficiency of audio creation.
In this case, when the user clicks the first area, the terminal device can intelligently recommend the accompaniment associated with the first accompaniment area, display the note chart of that accompaniment in the first accompaniment area, and display the corresponding first lyrics area in the second area, thereby reducing the complexity and improving the efficiency of music creation.
Case 2: in response to a trigger operation on the second area.
Optionally, in response to a trigger operation on the second area, the first lyrics area is displayed in the second area, and the first accompaniment area corresponding to the first lyrics area is displayed in the first area. For example, when the user performs a click operation on the second area, the terminal device may display the first lyrics area in the second area and display the corresponding first accompaniment area in the first area. For example, if the terminal device displays the intro lyrics area in the second area, it displays the intro accompaniment area corresponding to the intro lyrics area in the first area.
Optionally, the trigger operation on the second area may include a touch operation or a voice operation by the user on the second area, which is not limited in the embodiments of the present disclosure.
The process of displaying the first lyrics area in the second area and the corresponding first accompaniment area in the first area in this case is described below with reference to Fig. 5.
Fig. 5 is a schematic diagram of displaying a first lyrics area and a first accompaniment area according to an embodiment of the present disclosure. Referring to Fig. 5, a terminal device is included. The display page of the terminal device includes a first page, which includes a first area and a second area; the second area includes an add-text control. When the user clicks the add-text control with a mouse, the terminal device can generate the lyric paragraph title 'Verse' in the second area and display the verse accompaniment area in the first area, which includes the accompaniment of the verse. This reduces the operational complexity of music creation and improves the efficiency of audio creation.
In this case, when the user clicks the second area, the terminal device can display the first lyrics area in the second area and the corresponding first accompaniment area in the first area, thereby reducing the complexity and improving the efficiency of music creation.
Optionally, after the terminal device displays the first accompaniment area in the first area and the first lyrics area in the second area, the audio processing method further includes a deletion operation on the first accompaniment area or the first lyrics area. Optionally, the terminal device may delete the first accompaniment area or the first lyrics area in the following feasible implementation: in response to a deletion operation on the first accompaniment area, cancelling the display, in the second area, of the first lyrics area corresponding to the first accompaniment area; or, in response to a deletion operation on the first lyrics area, cancelling the display, in the first area, of the first accompaniment area corresponding to the first lyrics area.
Optionally, if the terminal device deletes the first accompaniment area from the first area, it cancels the display of the corresponding first lyrics area in the second area. For example, the intro accompaniment area in the first area is associated with the intro lyrics area in the second area, and the verse accompaniment area in the first area is associated with the verse lyrics area in the second area; if the user deletes the intro accompaniment area from the first area, the terminal device cancels the display of the intro lyrics area in the second area, and if the user deletes the verse accompaniment area, the terminal device cancels the display of the verse lyrics area.
Optionally, if the terminal device deletes the first lyrics area from the second area, it cancels the display of the corresponding first accompaniment area in the first area. For example, the intro lyrics area in the second area is associated with the intro accompaniment area in the first area, and the outro lyrics area in the second area is associated with the outro accompaniment area in the first area; if the user deletes the intro lyrics area from the second area, the terminal device cancels the display of the intro accompaniment area in the first area, and if the user deletes the outro lyrics area, the terminal device cancels the display of the outro accompaniment area.
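The linked deletion described above can be sketched in a few lines. This is a hypothetical model for illustration; the paired-by-section scheme is an assumption, not the disclosure's data structure.

```python
# Toy model of linked deletion: removing a region from either area also
# removes its counterpart in the other area, keyed by song section.

class LinkedRegions:
    def __init__(self):
        self.accompaniment = {}  # section -> accompaniment region
        self.lyrics = {}         # section -> lyrics region

    def add_pair(self, section):
        self.accompaniment[section] = f"{section} accompaniment"
        self.lyrics[section] = f"{section} lyrics"

    def delete_accompaniment(self, section):
        # deleting the accompaniment region also hides the paired lyrics region
        self.accompaniment.pop(section, None)
        self.lyrics.pop(section, None)

    def delete_lyrics(self, section):
        # deleting the lyrics region also hides the paired accompaniment region
        self.lyrics.pop(section, None)
        self.accompaniment.pop(section, None)

regions = LinkedRegions()
regions.add_pair("intro")
regions.add_pair("verse")
regions.delete_lyrics("intro")        # intro disappears from BOTH areas
print(sorted(regions.accompaniment))  # ['verse']
```

Either deletion path leaves the two maps with identical key sets, which is the invariant the embodiment maintains.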
The process of deleting the first lyrics area and the first accompaniment area is described below with reference to Figs. 6A and 6B.
Fig. 6A is a schematic diagram of deleting a first lyrics area and a first accompaniment area according to an embodiment of the present disclosure. Referring to Fig. 6A, a terminal device is included. The display page of the terminal device includes a first page, which includes a first area and a second area. The first area includes a verse accompaniment area and a chorus accompaniment area, containing the verse accompaniment and the chorus accompaniment, respectively; the second area includes a verse lyrics area containing the text 'Verse' and a chorus lyrics area containing the text 'Chorus'.
Referring to Fig. 6A, when the user clicks the verse lyrics area with a mouse and clicks the delete control to delete it, the display of the text 'Verse' is cancelled in the second area of the first page, and the display of the verse accompaniment area is cancelled in the first area. This reduces the operational complexity of music creation and improves its efficiency.
Fig. 6B is a schematic diagram of deleting a first accompaniment area and a first lyrics area according to an embodiment of the present disclosure. Referring to Fig. 6B, a terminal device is included. The display page of the terminal device includes a first page, which includes a first area and a second area. The first area includes a verse accompaniment area and a chorus accompaniment area, containing the verse accompaniment and the chorus accompaniment, respectively; the second area includes a verse lyrics area containing the text 'Verse' and a chorus lyrics area containing the text 'Chorus'.
Referring to Fig. 6B, when the user clicks the verse accompaniment area with a mouse and clicks the delete control to delete it, the display of the verse accompaniment area is cancelled in the first area of the first page, and the display of the associated text 'Verse' is cancelled in the second area. This reduces the operational complexity of music creation and improves its efficiency.
An embodiment of the present disclosure provides an audio processing method in which the terminal device may display a first page including a first area and a second area; in response to a trigger operation on the first area, display the first accompaniment area in the first area and the corresponding first lyrics area in the second area; or, in response to a trigger operation on the second area, display the first lyrics area in the second area and the corresponding first accompaniment area in the first area. In this way, whichever area the user edits, the terminal device can display the content associated with the editing operation in the other area, thereby reducing the complexity and improving the efficiency of music creation.
On the basis of the embodiment shown in Fig. 2, the method of displaying, in response to a trigger operation on the first area, the first accompaniment area in the first area and the corresponding first lyrics area in the second area is described in detail below with reference to Fig. 7.
Fig. 7 is a schematic diagram of displaying a first accompaniment area and a first lyrics area according to an embodiment of the present disclosure. In the embodiment shown in Fig. 7, the first area includes a first track. Referring to Fig. 7, the method flow includes:
S701: In response to a touch operation on the first track, display an accompaniment style window in the first area.
Optionally, the first area may include a first track, for example, a first track associated with the beat of the arrangement. Optionally, the accompaniment style window includes a plurality of accompaniment style controls, for example, accompaniment style control A and accompaniment style control B, each of which may be associated with one accompaniment style. For example, the accompaniment style window may include a 'Pop' control, an 'Electronic' control, and a 'Rock' control, where the 'Pop' control corresponds to the pop style, the 'Electronic' control to the electronic style, and the 'Rock' control to the rock style.
Optionally, when the user clicks the first track, the accompaniment style window including the plurality of accompaniment style controls may pop up in the first area of the first page. It should be noted that the accompaniment style window may be in the first area or in another area of the first page, which is not limited in the embodiments of the present disclosure.
The process of displaying the accompaniment style window is described below with reference to Fig. 8.
Fig. 8 is a schematic diagram of a process of displaying an accompaniment style window according to an embodiment of the present disclosure. Referring to Fig. 8, a terminal device is included. The display page of the terminal device includes a first page, which includes a first area and a second area; the first area includes a first track. When the user clicks the first track with a mouse, an accompaniment style window pops up on the right side of the first area, including a rock control, a folk control, a classical control, and a pop control.
S702: In response to a touch operation on an accompaniment style control, determine a target accompaniment style.
Optionally, the target accompaniment style is the style of the accompaniment associated with the first accompaniment area. For example, if the accompaniment style window includes a control for accompaniment style A and a control for accompaniment style B, then when the user clicks the control for style A, the terminal device determines that the target accompaniment style is style A and the accompaniment associated with the first accompaniment area is in style A; when the user clicks the control for style B, the terminal device determines that the target accompaniment style is style B and the accompaniment associated with the first accompaniment area is in style B.
Optionally, the terminal device may intelligently generate the accompaniment associated with the first accompaniment area based on the target accompaniment style. For example, if the user clicks the rock style control in the accompaniment style window, the accompaniment that the terminal device generates for the first accompaniment area is in the rock style; if the user clicks the electronic style control, the generated accompaniment is in the electronic style.
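The style-selection step above can be sketched as follows. This is an illustrative stand-in: the style list and the trivial "generator" are assumptions for exposition, not the patent's generation algorithm.

```python
# Illustrative sketch of target-style selection: tapping a style control fixes
# the target style, and every accompaniment region generated afterwards
# carries that style.

STYLES = ["rock", "folk", "classical", "pop"]  # mirrors the window in Fig. 8

class AccompanimentGenerator:
    def __init__(self):
        self.target_style = None

    def choose_style(self, style):
        if style not in STYLES:
            raise ValueError(f"unknown style: {style}")
        self.target_style = style

    def generate(self, section):
        # every generated region inherits the previously chosen target style
        return {"section": section, "style": self.target_style}

gen = AccompanimentGenerator()
gen.choose_style("rock")
print(gen.generate("verse"))  # {'section': 'verse', 'style': 'rock'}
```

Choosing a new style later simply retargets subsequent generations, matching the flow in which S702 precedes S703.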
The process of determining the target accompaniment style is described below with reference to Fig. 9.
Fig. 9 is a schematic diagram of a process of determining a target accompaniment style according to an embodiment of the present disclosure. Referring to Fig. 9, a terminal device is included. The display page of the terminal device includes a first page, which includes a first area and a second area. The first area includes a first track, and an accompaniment style window pops up on the right side of the first area, including a rock control, a folk control, a classical control, and a pop control. When the user clicks the rock control, the terminal device can determine that the target accompaniment style is the rock style.
S703: In response to a touch operation on the first track, display the first accompaniment area on the first track.
Optionally, the first accompaniment area includes accompaniment in the target accompaniment style. For example, after the terminal device determines the target accompaniment style, in response to a touch operation on the first track, the terminal device may display, on the first track, the note chart of the accompaniment associated with the first accompaniment area, where the accompaniment style indicated by that note chart is the target accompaniment style.
Optionally, in response to a touch operation on the first track, the terminal device may display the first accompaniment area on the first track in the following feasible implementation: in response to a touch operation on the first track, displaying an accompaniment-adding window. Optionally, the accompaniment-adding window includes accompaniment paragraph controls, an accompaniment paragraph being the position of a section of accompaniment within the whole accompaniment. For example, accompaniment paragraphs may include the intro, verse, chorus, and outro, and the accompaniment-adding window may include an intro control, a verse control, a chorus control, an outro control, and so on. For example, when the user clicks the first track, the terminal device may display the accompaniment-adding window.
Optionally, in response to a touch operation on an accompaniment paragraph control, the first accompaniment area is displayed on the first track. Optionally, the paragraph of the accompaniment associated with the first accompaniment area is the same as the accompaniment paragraph corresponding to the clicked control. For example, if the accompaniment-adding window includes an intro control and a verse control, then when the user clicks the intro control, the terminal device generates the intro accompaniment area as the first accompaniment area and displays the intro accompaniment in it; when the user clicks the verse control, the terminal device generates the verse accompaniment area and displays the verse accompaniment in it. For example, when the target accompaniment style is rock, if the user clicks the verse control, the terminal device may display the verse accompaniment area containing the verse accompaniment on the first track of the first area; if the user clicks the chorus control, the terminal device may display the chorus accompaniment area containing the chorus accompaniment.
Optionally, the first accompaniment area further includes an accompaniment display area, which includes the amplitude waveform corresponding to the accompaniment associated with the first accompaniment area. That is, the first accompaniment area displays its associated accompaniment through the accompaniment display area, which may include the note chart, spectrogram, and the like of that accompaniment. Optionally, the accompaniment display area and the first accompaniment area may be the same size or different sizes, which is not limited in the embodiments of the present disclosure.
In response to a touch operation on the accompaniment display area, the size of the accompaniment display area is adjusted and the amplitude waveform is adjusted. For example, the terminal device may adjust the size of the accompaniment display area in response to a slide operation on its edge. It should be noted that when the size of the accompaniment display area is adjusted, the amplitude waveform within the accompaniment display area also changes.
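A minimal sketch of the resize behavior, assuming the waveform is just a list of amplitude samples: when the display area is resized, the drawn waveform is re-rendered to the new width (here, by naive resampling), so what is shown always changes with the area. The rendering scheme is an assumption for illustration.

```python
# When the accompaniment display area is resized, re-render the amplitude
# waveform to the new width by picking evenly spaced samples.

def render_waveform(samples, width):
    """Pick `width` evenly spaced amplitudes from the accompaniment samples."""
    if width <= 0:
        return []
    step = len(samples) / width
    return [samples[int(i * step)] for i in range(width)]

samples = [0.0, 0.2, 0.9, 0.4, 0.1, 0.7, 0.3, 0.8]
narrow = render_waveform(samples, 4)  # small display area
wide = render_waveform(samples, 8)    # after dragging the edge outward
print(narrow)  # [0.0, 0.9, 0.1, 0.3]
```

Widening the area shows more of the waveform detail; narrowing it compresses the same accompaniment into fewer drawn samples, matching the note that the waveform changes whenever the area is resized.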
The process of displaying the first accompaniment area is described below with reference to Fig. 10.
Fig. 10 is a schematic diagram of a process of displaying a first accompaniment area according to an embodiment of the present disclosure. Referring to Fig. 10, a terminal device is included. The display page of the terminal device includes a first page, which includes a first area and a second area; the first area includes a first track. When the user clicks the first track, an accompaniment style window pops up on the right side of the first area, including a rock control, a folk control, a classical control, and a pop control.
Referring to Fig. 10, when the user clicks the rock control, the terminal device determines that the target style is the rock style. When the user clicks the first track in the first area again, an accompaniment-adding window including a verse control and an intro control may pop up in the first area. When the user clicks the verse control, the terminal device may display the verse accompaniment area in the first area; the verse accompaniment area includes the audio display area corresponding to the verse accompaniment, which contains the amplitude waveform of the rock-style verse accompaniment.
Referring to Fig. 10, when the user drags the audio display area corresponding to the verse accompaniment to the right, the length of that audio display area on the first track increases (that is, the verse accompaniment occupies more time in the composition; since the first track corresponds to the playback progress, the verse accompaniment area also lengthens). Moreover, since the verse accompaniment is a whole, the entire section of accompaniment changes when the verse lengthens, so the amplitude waveform in the audio display area also changes. This improves the flexibility and efficiency of audio creation.
It should be noted that the first track in Fig. 10 shows only the verse accompaniment area. If the first track further includes the audio display areas of the chorus accompaniment and the intro accompaniment, then adjusting the size of any one audio display area changes the amplitude waveform in every audio display area.
S704: Display, in the second area, the first lyrics area corresponding to the first accompaniment area.
Optionally, after displaying the first accompaniment area in the first area, the terminal device may display the corresponding first lyrics area in the second area. For example, if the terminal device displays the intro accompaniment area in the first area, it may display the intro lyrics area in the second area; if it displays the verse accompaniment area in the first area, it may display the verse lyrics area in the second area.
An embodiment of the present disclosure provides a method for displaying a first accompaniment area and a first lyrics area: in response to a touch operation on the first track, an accompaniment style window is displayed in the first area; in response to a touch operation on an accompaniment style control in the window, a target accompaniment style is determined; in response to a touch operation on the first track, the first accompaniment area is displayed on the first track, and the corresponding first lyrics area is displayed in the second area. In this way, the terminal device can display the first accompaniment area and generate its associated accompaniment, and after the user adds the first accompaniment area in the first area, the terminal device can display the corresponding first lyrics area in the second area, thereby reducing the complexity and improving the efficiency of music creation.
On the basis of any of the above embodiments, the method of displaying, in response to a trigger operation on the second area, the first lyrics area in the second area and the first accompaniment area corresponding to the first lyrics area in the first area is described in detail below with reference to Fig. 11.
Fig. 11 is a schematic diagram of displaying a first lyrics area and a first accompaniment area according to an embodiment of the present disclosure. In the embodiment shown in Fig. 11, the first lyrics area includes a lyric paragraph title and lyrics. Referring to Fig. 11, the method flow includes:
S1101: In response to a touch operation on the second area, display a lyric paragraph window in the second area.
Optionally, the second area may include a first control; when the user clicks the first control, the second area may display the lyric paragraph window. Optionally, the user may input to the terminal device voice information for generating a lyric paragraph (e.g., voice information for generating the intro title), and the terminal device generates the corresponding lyric paragraph title in the second area according to that voice information.
Optionally, the lyric paragraph window includes lyric paragraph controls, for example, lyric paragraph control A and lyric paragraph control B, each of which may be associated with the title of one lyric paragraph. For example, the lyric paragraph window may include an 'Intro' control, a 'Verse' control, and a 'Chorus' control, where the lyric paragraph title associated with the 'Intro' control is the intro, that associated with the 'Verse' control is the verse, and that associated with the 'Chorus' control is the chorus.
The process of displaying the lyric paragraph window is described below with reference to Fig. 12.
Fig. 12 is a schematic diagram of a process of displaying a text title window according to an embodiment of the present disclosure. Referring to Fig. 12, a terminal device is included. The display page of the terminal device includes a first page, which includes a first area and a second area; the second area includes a first control. When the user clicks the first control, the second area displays the lyric paragraph window, which includes an intro paragraph control and a verse paragraph control.
It should be noted that the second area may include multiple first controls, which is not limited in the embodiments of the present disclosure. When displaying the second area, the terminal device may also display multiple lyric paragraph titles (e.g., intro, verse, chorus, and outro) in the second area according to music theory, which facilitates the user's music creation and improves its efficiency.
S1102: In response to a touch operation on a lyric paragraph control, display the first lyrics area in the second area.
Optionally, the first lyrics area includes the lyric paragraph title associated with the clicked lyric paragraph control. For example, if the user clicks the verse paragraph control, the first lyrics area includes the verse title; if the user clicks the intro paragraph control, the first lyrics area includes the intro title.
The process of displaying the first lyrics area is described below with reference to Fig. 13.
Fig. 13 is a schematic diagram of a process of displaying a first lyrics area according to an embodiment of the present disclosure. Referring to Fig. 13, a terminal device is included. The display page of the terminal device includes a first page, which includes a first area and a second area; the second area includes a first control. When the user clicks the first control, the second area displays the lyric paragraph window, which includes an intro paragraph control and a verse paragraph control. When the user clicks the intro paragraph control, the terminal device determines that the lyric paragraph is the intro paragraph, cancels the display of the lyric paragraph window, displays the lyric paragraph title 'Intro' at the first control, and displays the intro accompaniment area, containing the intro accompaniment, in the first area.
S1103: Display, in the first area, the first accompaniment area corresponding to the first lyrics area.
Optionally, after displaying the first lyrics area in the second area, the terminal device may display the corresponding first accompaniment area in the first area. For example, if the first lyrics area displayed in the second area is the intro lyrics area, the terminal device may display the intro accompaniment area in the first area; if it is the verse lyrics area, the terminal device may display the verse accompaniment area in the first area.
It should be noted that when displaying the first accompaniment area corresponding to the first lyrics area, if the terminal device has already determined the target accompaniment style selected by the user, the first accompaniment area displayed in the first area may include accompaniment in the target style; if the target style has not been determined, the terminal device may display the accompaniment style window and, once the user determines the target style, display accompaniment in that style in the first accompaniment area. For the method of determining the target accompaniment style, reference may be made to the embodiment shown in Fig. 7, which is not repeated here.
S1104: In response to an editing operation on the target area in the first lyrics area, display a lyrics window, the lyrics window including at least one piece of lyrics.
Optionally, the first lyrics area further includes a target area associated with the lyric paragraph title. For example, the target area may be below the lyric paragraph title, or to its right, which is not limited in the embodiments of the present disclosure.
Optionally, the editing operation may be a touch operation, a voice operation, or a text input operation, which is not limited in the embodiments of the present disclosure. For example, the editing operation may be the user entering the text 'rain' in the target area, or a touch operation plus a voice operation on the target area (e.g., a long-press operation as the touch operation and inputting the voice 'rain' as the voice operation).
Optionally, the lyrics window includes at least one piece of lyrics. Optionally, the at least one piece of lyrics is associated with the editing operation. For example, if the editing operation is entering the text 'rain', the lyrics displayed in the lyrics window relate to rain: the terminal device can generate lyrics associated with rain and display them in the lyrics window. In this way, the terminal device can generate lyrics intelligently, reducing the complexity of music creation.
S1105: In response to a touch operation on target lyrics among the at least one piece of lyrics, display the target lyrics in the target area.
Optionally, after the terminal device displays the lyrics window in the second area, in response to the user's touch operation on target lyrics among the at least one piece of lyrics, the target lyrics are displayed in the target area. For example, if the lyrics window includes lyrics A and lyrics B, then when the user clicks lyrics A, the terminal device displays lyrics A in the target area, and when the user clicks lyrics B, the terminal device displays lyrics B in the target area.
It should be noted that after the terminal device displays lyrics in the target area, the displayed lyrics can be modified in response to a modification operation. For example, if the target area displays the lyrics 'hello', the user can modify them to 'goodbye' through a modification operation. In this way, during music creation the user can flexibly modify the smart lyrics recommended by the terminal device, improving the flexibility of music creation.
It should be noted that the terminal device may display, in the target area, at least one piece of lyrics associated with the editing operation, or the user may directly input lyrics into the target area through the terminal device, which is not limited in the embodiments of the present disclosure. In this way, if a music creator's composing ability is limited, the terminal device can generate lyrics associated with the editing operation; if the creator's composing ability is strong, the creator can directly input self-written lyrics into the target area. The user can thus create music intelligently and in a personalized way, reducing the complexity and improving the efficiency of music creation.
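The keyword-driven suggestion step above can be sketched as a lookup against a candidate bank. This is a hypothetical illustration: the candidate lines and the substring match are stand-ins, not the disclosure's recommendation model.

```python
# Illustrative sketch of lyric suggestion: the text typed in the target area
# is matched against a candidate bank, and matching lines are offered in the
# lyrics window for the user to pick or later modify.

CANDIDATE_LYRICS = [
    "a rainy day is beautiful",
    "strolling in the rain",
    "sunlight on the hill",
]

def suggest_lyrics(keyword):
    """Return candidate lyric lines related to the typed keyword."""
    return [line for line in CANDIDATE_LYRICS if keyword in line]

print(suggest_lyrics("rain"))  # ['a rainy day is beautiful', 'strolling in the rain']
```

After the user picks a suggestion, the embodiment still lets them edit the text in place, so the suggestion is a starting point rather than a fixed result.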
The process of displaying lyrics in an embodiment of the present disclosure is described below with reference to Fig. 14.
Fig. 14 is a schematic diagram of a process of displaying lyrics according to an embodiment of the present disclosure. Referring to Fig. 14, a terminal device is included. The display page of the terminal device includes a first page, which includes a first area and a second area; the second area includes a first control. When the user clicks the first control, the second area displays the lyric paragraph window, which includes an intro paragraph control and a verse paragraph control. When the user clicks the intro paragraph control, the terminal device determines that the lyric paragraph title is the intro, cancels the display of the lyric paragraph window, displays the lyric paragraph title 'Intro' at the first control, and displays the intro accompaniment area, containing the intro accompaniment, in the first area.
Referring to Fig. 14, when the user clicks the target area under the lyric paragraph title 'Intro' and enters the text 'rain' in the target area, the terminal device may display a lyrics window in the second area, which includes the lyrics 'A rainy day is beautiful' and 'Strolling in the rain' (lyrics associated with the entered text 'rain'). When the user clicks the lyrics 'A rainy day is beautiful', the terminal device cancels the display of the lyrics window and displays those lyrics in the target area as the intro lyrics. When the user performs a touch operation on them, the user can modify that piece of lyrics to 'A rainy day is cool', and the target area then displays 'A rainy day is cool'. In this way, when the user enters the key content of the lyrics, the terminal device can recommend, based on that key content, lyrics suited to the accompaniment style, thereby reducing the complexity and improving the efficiency of music creation.
An embodiment of the present disclosure provides a method for displaying a first lyrics area and a first accompaniment area: in response to a touch operation on the second area, a lyric paragraph window is displayed in the second area; in response to a touch operation on a lyric paragraph control in the window, the first lyrics area is displayed in the second area and the corresponding first accompaniment area is displayed in the first area; in response to an editing operation on the target area in the first lyrics area, a lyrics window is displayed; and in response to a touch operation on target lyrics among the at least one piece of lyrics in the lyrics window, the target lyrics are displayed in the target area. In this way, when the user adds the first lyrics area in the second area, the terminal device can display the corresponding first accompaniment area in the first area, reducing the complexity of music creation; and in response to the user's editing operation on the first lyrics area, the terminal device can automatically generate lyrics, improving the efficiency of music creation.
On the basis of any of the above embodiments, after the first accompaniment area is displayed in the first area and the first lyrics area in the second area, the audio processing method further includes a method for displaying a first voice input by the user, which is described in detail below with reference to Fig. 15.
Fig. 15 is a schematic diagram of a method for displaying a first voice according to an embodiment of the present disclosure. In the embodiment shown in Fig. 15, the first area further includes a second track. Referring to Fig. 15, the method flow includes:
S1501: In response to a touch operation on the second track, display a sound effect window.
Optionally, the sound effect window includes sound effect controls, for example, a reverb control and an electronic control. Optionally, the second track is used to display the voice input by the user. For example, when the user inputs a segment of voice to the terminal device, the second track may display the spectrogram or note chart corresponding to that voice. Optionally, in response to a touch operation on the second track, the terminal device may display the sound effect window on the first page. For example, when the user clicks the second track, the terminal device may display the sound effect window in the first area, in the second area, or in another area of the first page, which is not limited in the embodiments of the present disclosure.
The process of displaying the sound effect window is described below with reference to Fig. 16.
Fig. 16 is a schematic diagram of a process of displaying a sound effect window according to an embodiment of the present disclosure. Referring to Fig. 16, a terminal device is included. The display page of the terminal device includes a first page, which includes a first area and a second area. The second area includes the lyric paragraph title 'Intro' and the lyrics 'A rainy day is cool'. The first area includes a first track and a second track; the first track includes the intro accompaniment area, which contains the intro accompaniment.
Referring to Fig. 16, when the user clicks the second track, a sound effect window may pop up on the right side of the first area, including an electronic control, an equalization control, and a mixing control. The electronic control can modify the timbre of the user's input voice to an electronic timbre, the equalization control can modify it to an equalized timbre, and the mixing control can modify it to a mixed timbre. In this way, the terminal device includes multiple music creation functions, and the user can create music in a personalized and diversified way, improving the user experience and the efficiency of music creation.
S1502: In response to a touch operation on a sound effect control, determine a target sound effect.
Optionally, the sound effect window includes at least one sound effect control; when the user clicks a sound effect control, the terminal device can determine the target sound effect. For example, if the sound effect window includes a mixing control and an electronic control, then if the user clicks the mixing control, the target sound effect is mixing, and if the user clicks the electronic control, the target sound effect is electronic.
Optionally, after the touch operation on the sound effect control, the terminal device may display a track-adding control in the first area and, in response to a touch operation on the track-adding control, display a track associated with the second track in the first area. For example, after the user clicks a sound effect control in the sound effect window, the terminal device may display the track-adding control in the area below the second track; when the user clicks the track-adding control, the terminal device may display another track in the area below the second track, with the same function as the second track. When that track is used to display the user's input voice, a sound effect may be selected anew, or the same sound effect as the second track may be used, which is not limited in the embodiments of the present disclosure.
The process of adding a track associated with the second track is described below with reference to Fig. 17.
Fig. 17 is a schematic diagram of adding a track associated with a second track according to an embodiment of the present disclosure. Referring to Fig. 17, a terminal device is included. The display page of the terminal device includes a first page, which includes a first area and a second area. The second area includes the lyric paragraph title 'Intro' and the lyrics 'A rainy day is cool'. The first area includes a first track, a second track, and a sound effect window; the first track includes the intro accompaniment area, which contains the intro accompaniment, and the sound effect window includes an electronic control, an equalization control, and a mixing control.
Referring to Fig. 17, when the user clicks the electronic control, the terminal device cancels the display of the sound effect window and determines that the sound effect of the second track is the electronic sound effect. The terminal device displays a track-adding control below the second track; when the user clicks it, the terminal device may display track A, whose function is the same as that of the second track. In this way, the user can create multiple tracks with different sound effects during music creation, improving the flexibility of music creation.
S1503: In response to a voice operation input by the user, display, on the second track, a first voice associated with the voice operation.
Optionally, the voice operation may be voice input by the user. For example, after the first accompaniment area is displayed in the first area and the first lyrics area in the second area, the user may sing according to the accompaniment in the first accompaniment area and the lyrics in the first lyrics area; the terminal device may capture the content sung by the user and display the note chart corresponding to the user's voice on the second track.
Optionally, the sound effect associated with the timbre of the first voice is the target sound effect. For example, if the target sound effect of the second track is electronic, the timbre of the music sung by the user is the electronic timbre; if the target sound effect of the second track is mixing, the timbre is the mixed timbre. Optionally, the terminal device may display, on the track associated with the second track, other voices whose sound effect differs from that of the first voice, which can improve the flexibility of audio editing.
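The per-track effect behavior above can be sketched as follows. The "effects" here are trivial numeric stand-ins for illustration, not real audio DSP, and the names are assumptions.

```python
# Illustrative sketch: the sound effect chosen for a track is applied to the
# timbre of every voice clip recorded onto that track.

EFFECTS = {
    "electronic": lambda s: [round(v * 1.5, 2) for v in s],  # stand-in transform
    "reverb": lambda s: s + [v * 0.5 for v in s[-2:]],       # crude decay tail
}

class VocalTrack:
    def __init__(self, effect):
        self.effect = EFFECTS[effect]
        self.clips = []

    def record(self, samples):
        # every recorded clip passes through the track's target effect
        self.clips.append(self.effect(samples))

track = VocalTrack("electronic")
track.record([0.2, 0.4])
print(track.clips)  # [[0.3, 0.6]]
```

A second track created with a different effect processes its own clips independently, mirroring the embodiment in which the added track may reuse or re-select a sound effect.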
An embodiment of the present disclosure provides a method for displaying a first voice: in response to a touch operation on the second track, a sound effect window is displayed; in response to a touch operation on a sound effect control in the sound effect window, a target sound effect is determined; and in response to a voice operation input by the user, a first voice associated with the voice operation is displayed on the second track. In this way, after the terminal device determines the accompaniment and lyrics, it can display the content sung by the user in the first area, thereby improving the effect of music creation.
Fig. 18 is a schematic structural diagram of an audio processing apparatus according to an embodiment of the present disclosure. Referring to Fig. 18, the audio processing apparatus 180 includes a display module 181 and a response module 182, wherein:
the display module 181 is configured to display a first page, the first page comprising a first area and a second area, the first area being associated with audio editing and the second area being associated with text editing;
the response module 182 is configured to, in response to an editing operation on the first area or the second area, display a first accompaniment area in the first area and display a first lyrics area in the second area.
According to one or more embodiments of the present disclosure, the response module 182 is specifically configured to:
in response to a trigger operation on the first area, display the first accompaniment area in the first area and display, in the second area, the first lyrics area corresponding to the first accompaniment area;
or,
in response to a trigger operation on the second area, display the first lyrics area in the second area and display, in the first area, the first accompaniment area corresponding to the first lyrics area.
According to one or more embodiments of the present disclosure, the response module 182 is specifically configured to:
in response to a touch operation on the first track, display an accompaniment style window in the first area, the accompaniment style window comprising a plurality of accompaniment style controls;
in response to a touch operation on an accompaniment style control, determine a target accompaniment style;
in response to a touch operation on the first track, display the first accompaniment area on the first track, the first accompaniment area comprising accompaniment in the target accompaniment style.
According to one or more embodiments of the present disclosure, the response module 182 is specifically configured to:
in response to a touch operation on the first track, display an accompaniment-adding window, the accompaniment-adding window comprising an accompaniment paragraph control, an accompaniment paragraph being the position of a section of accompaniment within the whole accompaniment;
in response to a touch operation on the accompaniment paragraph control, display the first accompaniment area on the first track, the paragraph of the accompaniment associated with the first accompaniment area being the same as the accompaniment paragraph corresponding to the accompaniment paragraph control.
According to one or more embodiments of the present disclosure, the first accompaniment area further comprises an accompaniment display area, the accompaniment display area comprising the amplitude waveform corresponding to the accompaniment associated with the first accompaniment area.
According to one or more embodiments of the present disclosure, the response module 182 is specifically configured to:
in response to a touch operation on the accompaniment display area, adjust the size of the accompaniment display area and adjust the amplitude waveform.
According to one or more embodiments of the present disclosure, the response module 182 is specifically configured to:
in response to a touch operation on the second area, display a lyric paragraph window in the second area, the lyric paragraph window comprising a lyric paragraph control;
in response to a touch operation on the lyric paragraph control, display the first lyrics area in the second area, the first lyrics area comprising the lyric paragraph title associated with the lyric paragraph control.
According to one or more embodiments of the present disclosure, the response module 182 is specifically configured to:
in response to an editing operation on the target area in the first lyrics area, display a lyrics window, the lyrics window comprising at least one piece of lyrics, the at least one piece of lyrics being associated with the editing operation;
in response to a touch operation on target lyrics among the at least one piece of lyrics, display the target lyrics in the target area.
According to one or more embodiments of the present disclosure, the response module 182 is specifically configured to:
in response to a deletion operation on the first accompaniment area, cancel the display, in the second area, of the first lyrics area associated with the first accompaniment area; or,
in response to a deletion operation on the first lyrics area, cancel the display, in the first area, of the first accompaniment area corresponding to the first lyrics area.
According to one or more embodiments of the present disclosure, the response module 182 is specifically configured to:
in response to a touch operation on the second track, display a sound effect window, the sound effect window comprising a sound effect control;
in response to a touch operation on the sound effect control, determine a target sound effect;
in response to a voice operation input by the user, display, on the second track, a first voice associated with the voice operation, the sound effect associated with the timbre of the first voice being the target sound effect.
The audio processing apparatus provided in this embodiment of the present disclosure can be used to execute the technical solutions of the above method embodiments; its implementation principles and technical effects are similar and are not repeated here.
Fig. 19 is a schematic structural diagram of another audio processing apparatus according to an embodiment of the present disclosure. Referring to Fig. 19, the audio processing apparatus 180 further includes an adding module 183, the adding module 183 being configured to:
display a track-adding control in the first area;
in response to a touch operation on the track-adding control, display, in the first area, a track associated with the second track.
The audio processing apparatus provided in this embodiment of the present disclosure can be used to execute the technical solutions of the above method embodiments; its implementation principles and technical effects are similar and are not repeated here.
An embodiment of the present disclosure further provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, cause the processor to perform the methods of the above method embodiments.
An embodiment of the present disclosure further provides a computer program which, when executed by a processor, implements the methods of the above method embodiments.
An embodiment of the present disclosure further provides a computer program product, comprising a computer program which, when executed by a processor, implements the methods of the above method embodiments.
The present disclosure provides an audio processing method and apparatus, and a terminal device. The terminal device can display a first page including a first area and a second area, where the first area is associated with audio editing and the second area with text editing, and, in response to an editing operation on the first area or the second area, display a first accompaniment area in the first area and a first lyrics area in the second area. In the above method, when the user performs an editing operation on the first area, the terminal device can display the first accompaniment area in the first area and the associated first lyrics area in the second area; when the user performs an editing operation on the second area, the terminal device can display the first lyrics area in the second area and the corresponding first accompaniment area in the first area. Therefore, whichever area the user edits, the terminal device can display the associated content in the other area, thereby reducing the operational complexity of music creation and improving its efficiency.
Fig. 20 is a schematic structural diagram of a terminal device according to an embodiment of the present disclosure. Referring to Fig. 20, it shows a schematic structural diagram of a terminal device 2000 suitable for implementing the embodiments of the present disclosure; the terminal device 2000 may be a terminal device or a server. The terminal device may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDA), tablet computers (PAD), portable multimedia players (PMP), and vehicle-mounted terminals (e.g., vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The terminal device shown in Fig. 20 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 20, the terminal device 2000 may include a processing apparatus (e.g., a central processing unit, a graphics processing unit, etc.) 2001, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 2002 or a program loaded from a storage apparatus 2008 into a random access memory (RAM) 2003. The RAM 2003 further stores various programs and data required for the operation of the terminal device 2000. The processing apparatus 2001, the ROM 2002, and the RAM 2003 are connected to one another through a bus 2004. An input/output (I/O) interface 2005 is also connected to the bus 2004.
Generally, the following apparatuses may be connected to the I/O interface 2005: input apparatuses 2006 including, for example, a touchscreen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output apparatuses 2007 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage apparatuses 2008 including, for example, a magnetic tape and a hard disk; and a communication apparatus 2009. The communication apparatus 2009 may allow the terminal device 2000 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 20 shows the terminal device 2000 with various apparatuses, it should be understood that it is not required to implement or possess all of the apparatuses shown; more or fewer apparatuses may alternatively be implemented or provided.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication apparatus 2009, installed from the storage apparatus 2008, or installed from the ROM 2002. When the computer program is executed by the processing apparatus 2001, the above functions defined in the methods of the embodiments of the present disclosure are executed.
It should be noted that the computer-readable medium of the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted using any appropriate medium, including but not limited to: a wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.
The computer-readable medium may be included in the above terminal device, or may exist independently without being assembled into the terminal device.
The computer-readable medium carries one or more programs which, when executed by the terminal device, cause the terminal device to perform the methods shown in the above embodiments.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the 'C' language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented with a dedicated hardware-based system that performs the specified functions or operations, or with a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a unit does not, in some cases, limit the unit itself; for example, the first acquisition unit may also be described as 'a unit for acquiring at least two Internet Protocol addresses'.
The functions described above herein may be executed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGA), application specific integrated circuits (ASIC), application specific standard products (ASSP), systems on chip (SOC), complex programmable logic devices (CPLD), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
It should be noted that the modifiers 'a/an' and 'a plurality of' mentioned in the present disclosure are illustrative rather than restrictive; a person skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as 'one or more'.
The names of the messages or information exchanged between the apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of those messages or information.
It can be understood that before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner in accordance with relevant laws and regulations, of the type, scope of use, and usage scenarios of the personal information involved in the present disclosure, and the user's authorization should be obtained.
For example, in response to receiving an active request from the user, a prompt message is sent to the user to clearly prompt the user that the operation requested to be performed will require obtaining and using the user's personal information. The user can thus autonomously choose, according to the prompt message, whether to provide personal information to the software or hardware, such as a terminal device, application, server, or storage medium, that performs the operations of the technical solution of the present disclosure.
As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt message may be sent to the user, for example, in the form of a pop-up window, in which the prompt message may be presented in text form. In addition, the pop-up window may also carry a selection control for the user to choose 'agree' or 'disagree' to provide personal information to the terminal device.
It can be understood that the above process of notifying the user and obtaining the user's authorization is only illustrative and does not limit the implementations of the present disclosure; other ways that satisfy relevant laws and regulations may also be applied in the implementations of the present disclosure.
It can be understood that the data involved in this technical solution (including but not limited to the data itself and the acquisition or use of the data) should comply with the requirements of the corresponding laws, regulations, and related provisions. The data may include information, parameters, and messages, such as flow switching indication information.
The above description is merely of preferred embodiments of the present disclosure and an explanation of the technical principles applied. A person skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to technical solutions formed by specific combinations of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Furthermore, although the operations are depicted in a particular order, this should not be understood as requiring that these operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are contained in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or method logical actions, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or actions described above. On the contrary, the specific features and actions described above are merely example forms of implementing the claims.

Claims (16)

  1. An audio processing method, comprising:
    displaying a first page, the first page comprising a first area and a second area, the first area being associated with audio editing and the second area being associated with text editing;
    in response to an editing operation on the first area or the second area, displaying a first accompaniment area in the first area and displaying a first lyrics area in the second area.
  2. The method according to claim 1, wherein the displaying, in response to an editing operation on the first area or the second area, a first accompaniment area in the first area and a first lyrics area in the second area comprises:
    in response to a trigger operation on the first area, displaying the first accompaniment area in the first area and displaying, in the second area, the first lyrics area corresponding to the first accompaniment area;
    or,
    in response to a trigger operation on the second area, displaying the first lyrics area in the second area and displaying, in the first area, the first accompaniment area corresponding to the first lyrics area.
  3. The method according to claim 2, wherein the first area comprises a first track, and the displaying, in response to a trigger operation on the first area, the first accompaniment area in the first area comprises:
    in response to a touch operation on the first track, displaying an accompaniment style window in the first area, the accompaniment style window comprising a plurality of accompaniment style controls;
    in response to a touch operation on an accompaniment style control, determining a target accompaniment style;
    in response to a touch operation on the first track, displaying the first accompaniment area on the first track, the first accompaniment area comprising accompaniment in the target accompaniment style.
  4. The method according to claim 3, wherein the displaying, in response to a touch operation on the first track, the first accompaniment area on the first track comprises:
    in response to a touch operation on the first track, displaying an accompaniment-adding window, the accompaniment-adding window comprising an accompaniment paragraph control, an accompaniment paragraph being the position of a section of accompaniment within the whole accompaniment;
    in response to a touch operation on the accompaniment paragraph control, displaying the first accompaniment area on the first track, the paragraph of the accompaniment associated with the first accompaniment area being the same as the accompaniment paragraph corresponding to the accompaniment paragraph control.
  5. The method according to claim 3 or 4, wherein the first accompaniment area further comprises an accompaniment display area, the accompaniment display area comprising the amplitude waveform corresponding to the accompaniment associated with the first accompaniment area.
  6. The method according to claim 5, further comprising:
    in response to a touch operation on the accompaniment display area, adjusting the size of the accompaniment display area and adjusting the amplitude waveform.
  7. The method according to claim 2, wherein the displaying, in response to a trigger operation on the second area, the first lyrics area in the second area comprises:
    in response to a touch operation on the second area, displaying a lyric paragraph window in the second area, the lyric paragraph window comprising a lyric paragraph control;
    in response to a touch operation on the lyric paragraph control, displaying the first lyrics area in the second area, the first lyrics area comprising the lyric paragraph title associated with the lyric paragraph control.
  8. The method according to claim 7, wherein the first lyrics area further comprises a target area associated with the lyric paragraph title, and after the displaying the first lyrics area in the second area, the method further comprises:
    in response to an editing operation on the target area in the first lyrics area, displaying a lyrics window, the lyrics window comprising at least one piece of lyrics, the at least one piece of lyrics being associated with the editing operation;
    in response to a touch operation on target lyrics among the at least one piece of lyrics, displaying the target lyrics in the target area.
  9. The method according to any one of claims 1 to 8, wherein after the displaying a first accompaniment area in the first area and a first lyrics area in the second area, the method further comprises:
    in response to a deletion operation on the first accompaniment area, cancelling the display, in the second area, of the first lyrics area associated with the first accompaniment area; or,
    in response to a deletion operation on the first lyrics area, cancelling the display, in the first area, of the first accompaniment area corresponding to the first lyrics area.
  10. The method according to any one of claims 1 to 9, wherein the first area comprises a second track, and after the displaying a first accompaniment area in the first area and a first lyrics area in the second area, the method further comprises:
    in response to a touch operation on the second track, displaying a sound effect window, the sound effect window comprising a sound effect control;
    in response to a touch operation on the sound effect control, determining a target sound effect;
    in response to a voice operation input by the user, displaying, on the second track, a first voice associated with the voice operation, the sound effect associated with the timbre of the first voice being the target sound effect.
  11. The method according to claim 10, wherein after the touch operation on the sound effect control, the method further comprises:
    displaying a track-adding control in the first area;
    in response to a touch operation on the track-adding control, displaying, in the first area, a track associated with the second track.
  12. An audio processing apparatus, comprising a display module and a response module, wherein:
    the display module is configured to display a first page, the first page comprising a first area and a second area, the first area being associated with audio editing and the second area being associated with text editing;
    the response module is configured to, in response to an editing operation on the first area or the second area, display a first accompaniment area in the first area and display a first lyrics area in the second area.
  13. A terminal device, comprising a processor and a memory;
    the memory stores computer-executable instructions;
    the processor executes the computer-executable instructions stored in the memory, so that the processor performs the audio processing method according to any one of claims 1 to 11.
  14. A computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the audio processing method according to any one of claims 1 to 11.
  15. A computer program product, comprising a computer program which, when executed by a processor, implements the audio processing method according to any one of claims 1 to 11.
  16. A computer program which, when executed by a processor, implements the audio processing method according to any one of claims 1 to 11.
PCT/CN2023/113811 2022-10-20 2023-08-18 音频处理方法、装置及终端设备 WO2024082802A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211289254.0 2022-10-20
CN202211289254.0A CN117953835A (zh) 2022-10-20 2022-10-20 音频处理方法、装置及终端设备

Publications (1)

Publication Number Publication Date
WO2024082802A1 true WO2024082802A1 (zh) 2024-04-25

Family

ID=90736943

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/113811 WO2024082802A1 (zh) 2022-10-20 2023-08-18 音频处理方法、装置及终端设备

Country Status (2)

Country Link
CN (1) CN117953835A (zh)
WO (1) WO2024082802A1 (zh)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005202425A (ja) * 2005-02-21 2005-07-28 Daiichikosho Co Ltd 楽曲の伴奏音と歌詞字幕映像を同期出力する装置
KR20190009909A (ko) * 2017-07-20 2019-01-30 니나노 주식회사 콘텐츠 싱크 생성 방법, 그 장치 및 이를 위한 인터페이스 모듈
CN111899706A (zh) * 2020-07-30 2020-11-06 广州酷狗计算机科技有限公司 音频制作方法、装置、设备及存储介质
CN113539216A (zh) * 2021-06-29 2021-10-22 广州酷狗计算机科技有限公司 旋律创作导航方法及其装置、设备、介质、产品
CN113611267A (zh) * 2021-08-17 2021-11-05 网易(杭州)网络有限公司 词曲处理方法、装置、计算机可读存储介质及计算机设备
CN113961742A (zh) * 2021-10-27 2022-01-21 广州博冠信息科技有限公司 一种数据处理方法、装置、存储介质及计算机系统
CN114495873A (zh) * 2022-02-11 2022-05-13 广州酷狗计算机科技有限公司 歌曲改编方法及其装置、设备、介质、产品
CN115065840A (zh) * 2022-06-07 2022-09-16 北京达佳互联信息技术有限公司 一种信息处理方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
CN117953835A (zh) 2024-04-30

Similar Documents

Publication Publication Date Title
WO2020113733A1 (zh) 动画生成方法、装置、电子设备及计算机可读存储介质
US20140059471A1 (en) Scrolling Virtual Music Keyboard
US20130295961A1 (en) Method and apparatus for generating media based on media elements from multiple locations
WO2022253157A1 (zh) 音频分享方法、装置、设备及介质
US11934632B2 (en) Music playing method and apparatus
CN110324718A (zh) 音视频生成方法、装置、电子设备及可读介质
WO2020224294A1 (zh) 用于处理信息的系统、方法和装置
US20190103084A1 (en) Singing voice edit assistant method and singing voice edit assistant device
US20200413003A1 (en) Method and device for processing multimedia information, electronic equipment and computer-readable storage medium
WO2024099350A1 (zh) 直播处理方法、装置和电子设备
US20240054157A1 (en) Song recommendation method and apparatus, electronic device, and storage medium
WO2024099348A1 (zh) 音频特效的编辑方法、装置、设备及存储介质
US20240103802A1 (en) Method, apparatus, device and medium for multimedia processing
WO2024099275A1 (zh) 媒体内容处理方法、装置、设备、可读存储介质及产品
WO2024082802A1 (zh) 音频处理方法、装置及终端设备
WO2024016901A1 (zh) 基于歌词的信息提示方法、装置、设备、介质及产品
WO2023174073A1 (zh) 视频生成方法、装置、设备、存储介质和程序产品
JP5375868B2 (ja) 再生方法切替装置、再生方法切替方法及びプログラム
WO2024066790A1 (zh) 音频处理方法、装置及电子设备
WO2024012257A1 (zh) 音频处理方法、装置及电子设备
Meikle Examining the effects of experimental/academic electroacoustic and popular electronic musics on the evolution and development of human–computer interaction in music
CN110164481A (zh) 一种歌曲录制方法、装置、设备及存储介质
KR20060079094A (ko) 휴대용 음악 편집기를 갖는 음악 작곡 시스템 및 이를이용한 온-라인 노래방 시스템의 운영 방법
WO2024104181A1 (zh) 确定音频的方法、装置、电子设备及存储介质
WO2023160713A1 (zh) 音乐生成方法、装置、设备、存储介质及程序