CN113535289A - Method and device for page presentation, mobile terminal interaction and audio editing - Google Patents


Info

Publication number
CN113535289A
CN113535289A (application CN202010313949.2A)
Authority
CN
China
Prior art keywords
audio
sound
trigger
area
playing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010313949.2A
Other languages
Chinese (zh)
Inventor
贾朔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Huancheng culture media Co.,Ltd.
Original Assignee
Beijing Wall Breaker Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Wall Breaker Technology Co ltd filed Critical Beijing Wall Breaker Technology Co ltd
Priority to CN202010313949.2A priority Critical patent/CN113535289A/en
Publication of CN113535289A publication Critical patent/CN113535289A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 — Arrangements for executing specific programs
    • G06F 9/451 — Execution arrangements for user interfaces
    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H — ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 1/00 — Details of electrophonic musical instruments
    • G10H 1/0033 — Recording/reproducing or transmission of music for electrophonic musical instruments

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Electrophonic Musical Instruments (AREA)

Abstract

A method for page presentation, mobile terminal interaction, and audio editing is disclosed. The page presentation method comprises: displaying an audio editing page comprising an audio trigger area and an audio rhythm display area; and displaying, in the audio rhythm display area, in response to a trigger signal from the audio trigger area. The mobile terminal can display the audio progress bar and an operable keyboard area on the same screen, so that mobile terminal users, especially smartphone users, can freely add instrument combinations and sound samples within a single audio editing page, conveniently producing musical works with a complete arrangement on the mobile terminal.

Description

Method and device for page presentation, mobile terminal interaction and audio editing
Technical Field
The present disclosure relates to the field of mobile terminal interaction, and in particular, to a method and an apparatus for page rendering, mobile terminal interaction, and audio editing.
Background
With the popularization of smartphones, people increasingly use mobile phones to record audio works (e.g., singing recordings) and expect to freely add various sound effects to their own works.
A digital audio workstation (DAW), also called host software, is computer software that integrates composing, arranging, and mixing for music production. Existing DAWs are usually installed on terminals equipped with large screens, such as desktop computers. In actual operation, because the progress bar and the keyboard area do not fit within the visible range of one screen, the user must switch back and forth between multiple interfaces. Moreover, because such key layouts contain many keys, directly porting them to a mobile terminal easily causes mis-taps due to the small touch targets, and creation requires professional knowledge, raising the barrier to use.
Therefore, there is a need for an audio editing scheme suitable for the mobile terminal.
Disclosure of Invention
The technical problem to be solved by the present disclosure is to provide a page presentation scheme that can display an audio progress bar and an operable keyboard region on the same screen, thereby allowing a user to freely add instrument combinations and sound samples and to produce musical works with a complete arrangement.
According to a first aspect of the present disclosure, there is provided a page rendering method, including: displaying an audio editing page comprising an audio triggering area and an audio rhythm display area; and the audio rhythm display area responds to the trigger signal of the audio trigger area for displaying.
According to a second aspect of the present disclosure, there is provided a mobile terminal interaction method, including: acquiring an input to a playing area of an audio editing page; rendering the input effect and playing the corresponding sound; and completing audio editing according to the input effect and the sound.
According to a third aspect of the present disclosure, there is provided an audio editing method comprising: displaying an audio editing page comprising an audio rhythm display area and an audio trigger area; acquiring triggering operation of a sound triggering key in the audio triggering area; and playing a sound effect corresponding to the sound trigger key operation, and displaying a playing display corresponding to the trigger operation in the audio rhythm display area.
According to a fourth aspect of the present disclosure, there is provided an audio and video editing method, including: displaying an audio editing page comprising an audio rhythm display area, an audio trigger area and a video display area; acquiring triggering operation of a sound triggering key in the audio triggering area; and playing a sound effect corresponding to the sound trigger key operation, and displaying playing display corresponding to the trigger operation in the audio rhythm display area. Wherein the corresponding video content is displayed within the video presentation area.
According to a fifth aspect of the present disclosure, there is provided a live broadcasting method including: acquiring and displaying an audio editing page for audio editing by a broadcaster, wherein the audio editing page comprises an audio rhythm display area and an audio trigger area; acquiring the operation of the broadcaster on the sound trigger key in the audio trigger area; and playing a sound effect corresponding to the sound trigger key operation, and displaying an advancing audio progress bar in the audio rhythm display area.
According to a sixth aspect of the present disclosure, there is provided an audio editing sharing method, including: simultaneously displaying an audio editing page including an audio rhythm display region and an audio trigger region to a plurality of users; acquiring the operation of a current operation user on a sound trigger key in the audio trigger area under the condition that an audio progress bar in the audio rhythm display area advances; and playing a sound effect corresponding to the sound trigger key operation to the plurality of users, and displaying the advancing audio progress bar in the audio rhythm display area.
According to a seventh aspect of the present disclosure, there is provided an audio producing method comprising: obtaining an audio performance work of a user; entering an audio editing page comprising an audio rhythm display area and an audio trigger area; acquiring the operation of a user on a sound trigger key in the audio trigger area under the condition that an audio progress bar in the audio rhythm display area advances, wherein the audio progress bar corresponds to the audio performance work; and superposing and playing a sound effect corresponding to sound trigger key operation on the audio performance works, and displaying the playing display corresponding to the trigger operation in the audio rhythm display area.
According to an eighth aspect of the present disclosure, there is provided a page rendering apparatus comprising: the audio editing page display unit is used for displaying an audio editing page comprising an audio rhythm display area and an audio trigger area; and the audio editing page refreshing unit is used for displaying the audio rhythm display area in response to the trigger signal of the audio trigger area.
According to a ninth aspect of the present disclosure, there is provided a mobile terminal interaction device, including: an input page acquisition unit for acquiring an input to an audio editing page playing area; the rendering and playing unit is used for rendering the input effect and playing the corresponding sound; and the audio editing unit is used for finishing audio editing according to the input effect and the sound.
According to a tenth aspect of the present disclosure, there is provided an audio editing apparatus comprising: the audio editing page display unit is used for displaying an audio editing page comprising an audio rhythm display area and an audio trigger area; the voice trigger key selection unit is used for acquiring the operation of a voice trigger key in the audio trigger area; and the playing and displaying unit is used for playing a sound effect corresponding to sound trigger key operation and displaying a playing display corresponding to the trigger operation in the audio rhythm display area, for example, displaying a corresponding sound mark in an audio progress bar.
According to an eleventh aspect of the present disclosure, there is provided a computing device comprising: a processor; and a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method as described in the first to seventh aspects above.
According to a twelfth aspect of the disclosure, there is provided a non-transitory machine-readable storage medium having stored thereon executable code which, when executed by a processor of an electronic device, causes the processor to perform the method as described in the first to seventh aspects above.
By previewing the performance and the edited score on the same screen at the mobile terminal and using a key layout better suited to mobile phone operation (such as a 12-pad layout), the audio editing scheme of the invention achieves, on a mobile terminal, a music production effect similar to that of a desktop digital audio workstation (DAW). The scheme can also generate melody prompts based on a chord score, lowering the barrier for users to compose melodies.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in greater detail exemplary embodiments thereof with reference to the attached drawings, in which like reference numerals generally represent like parts throughout.
FIG. 1 shows a schematic flow diagram of a page rendering method according to one embodiment of the invention.
FIG. 2 shows a layout diagram of an audio editing page, according to one embodiment of the invention.
Fig. 3 illustrates an operational rendering of an audio editing page according to one embodiment of the present invention.
Fig. 4 shows an example of a page rendering according to the invention.
Fig. 5 shows another example of page rendering according to the invention.
Fig. 6 shows a schematic flow chart of a mobile terminal interaction method according to an embodiment of the present invention.
FIGS. 7A-B show an effect editing example according to the present invention.
Fig. 8 shows a schematic flow chart of an audio editing method according to the invention.
FIG. 9 shows a schematic composition diagram of a page rendering apparatus according to one embodiment of the present invention.
Fig. 10 is a schematic diagram illustrating the components of a mobile-side interaction device according to an embodiment of the present invention.
Fig. 11 is a schematic diagram showing the composition of an audio editing apparatus according to an embodiment of the present invention.
Fig. 12 is a schematic structural diagram of a computing device that can be used to implement the page rendering, mobile terminal interaction, and audio editing methods described above according to an embodiment of the invention.
Detailed Description
Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
With the popularization of smartphones, people increasingly use mobile phones to record audio works (e.g., singing recordings) and expect to freely add various sound effects to their own works.
A digital audio workstation (DAW), also called host software, is computer software that integrates composing, arranging, and mixing for music production. Existing DAWs are usually installed on terminals equipped with large screens, such as desktop computers. In actual operation, because the progress bar and the keyboard area do not fit within the visible range of one screen, the user must switch back and forth between multiple interfaces. Moreover, because such key layouts contain many keys, directly porting them to a mobile terminal easily causes mis-taps due to the small touch targets, and creation requires professional knowledge, raising the barrier to use.
The invention provides a scheme for page presentation on a mobile terminal, together with corresponding mobile terminal interaction and audio editing, which can display an audio progress bar and an operable keyboard area on the same screen, making it convenient for a user to freely add instrument combinations and sound samples and to produce a musical work with a complete arrangement. Further, by using a key layout better suited to mobile phone operation (such as a 12-pad layout) and freely insertable, customizable sound effects, the scheme can achieve on a mobile terminal a music production effect similar to that of a desktop digital audio workstation (DAW). In addition, the scheme can generate melody prompts based on a chord score, lowering the barrier for users to compose melodies.
FIG. 1 shows a schematic flow diagram of a page rendering method according to one embodiment of the invention. The page presentation method is particularly suitable for implementation on a terminal device, such as a smartphone, through an installed APP. For example, the solution of the invention may be implemented as a dedicated audio editing APP, or as an audio editing module of a singing APP.
In step S110, an audio editing page including an audio rhythm display region and an audio trigger region is displayed. The audio editing page may be entered based on an audio editing operation by the user. For example, in a singing or karaoke scene, a user may first record a segment of audio and, after recording is complete, tap an audio editing button displayed on the touch screen to enter the audio editing page of the present invention. In one embodiment, the user may load audio for editing, for example audio stored locally on the phone. In another embodiment, the user can select a built-in audio melody on the audio editing page for sound effect editing. In an extreme embodiment, the user may load no audio at all and complete the input and editing of all sound elements directly in the audio editing page.
In step S120, the audio rhythm display region displays in response to a trigger signal of the audio trigger region.
In the invention, the audio trigger area and the audio rhythm display area can be displayed simultaneously in the audio operation page. The audio trigger area includes sound trigger keys for user operation, and the display of the sound trigger keys may be maintained at all times (although, as described below, sound trigger keys of different categories may be switched in and out), while the audio rhythm display area may perform a corresponding display only under a specific operation, for example displaying an audio progress bar during playback. Since the audio progress bar is an indication of audio playback progress, it may be displayed while the audio is playing and hidden at other times.
By displaying the audio trigger area and the audio rhythm display area simultaneously on the same audio editing page, the scheme avoids the inconvenience of a conventional DAW, in which the progress bar and keyboard area do not fit within the visible range of one screen and the user must switch back and forth between multiple interfaces, and thereby improves operating efficiency.
Here, the display in the audio rhythm display region in response to the trigger signal of the audio trigger region may take various forms. In particular, in response to operation of a sound trigger key within the audio trigger area, the audio rhythm display area may display a corresponding sound marker. A sound marker is visually perceivable content that marks the trigger input and is synchronized with the sound output corresponding to that input. For example, the user inputs a chord at a specific pitch under a piano timbre, and at the same time the audio circuit of the mobile terminal outputs the chord while a corresponding display is given in the audio rhythm display area. The sound marker may take various forms, for example a colored bubble or a dancing figure, and is preferably a marker displayed within an audio progress bar. To stay synchronized with the sound being played, each sound marker may be animated; in other words, while the sound it represents is playing, the marker appears differently than when it is not playing. For example, a sound marker may light up or otherwise change its display while its corresponding sound is played or while it passes the timeline.
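The synchronization between a sound marker and its sound can be sketched as follows. This is a minimal illustrative model, not the patent's implementation; the class and field names (`SoundMarker`, `start`, `duration`) are assumptions.

```python
from dataclasses import dataclass

@dataclass
class SoundMarker:
    start: float     # seconds into the track when the key was triggered
    duration: float  # how long the triggered sound effect lasts
    pitch: str       # e.g. "C4"

    def is_playing(self, playhead: float) -> bool:
        # The marker is "active" while the playhead passes through its span.
        return self.start <= playhead < self.start + self.duration

def render_state(marker: SoundMarker, playhead: float) -> str:
    # The marker is drawn differently while its sound is playing.
    return "lit" if marker.is_playing(playhead) else "dim"

marker = SoundMarker(start=2.0, duration=1.0, pitch="C4")
print(render_state(marker, 1.5))  # before the marker: dim
print(render_state(marker, 2.5))  # playhead inside the marker: lit
```

A real renderer would evaluate `render_state` on every frame against the audio clock, which is what keeps the visual marker in lockstep with the audible output.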
In one embodiment, the audio rhythm display area may include an audio progress bar, that is, a progress bar that presents at least a portion of the sound information contained in the audio as the timeline moves. The sound trigger keys displayed in the audio trigger area can add sound effects to the audio under the user's operation, and an added sound effect can be reflected in some form, such as a sound marker, in the audio progress bar within the audio rhythm display area.
FIG. 2 shows a layout diagram of an audio editing page, according to one embodiment of the invention. In the audio editing page shown in fig. 2, the audio rhythm display region 10 including the progress bar 11 may be located at an upper portion of the page, and the audio trigger region 20 including the sound trigger key 21 may be located at a lower portion of the page. In other embodiments, the audio tempo display region and the audio trigger region may also be located at different positions on the page, e.g. with the audio trigger region above and the audio tempo display region below.
Fig. 2 shows a page layout on a mobile terminal, in particular a smartphone. Since a smartphone is usually equipped with a touch screen, various operations can be completed via virtual keys displayed at corresponding positions on the touch screen. To this end, the user's operation of a sound trigger key in the audio trigger area may include touching a sound trigger key in the audio trigger area displayed on the touch screen. In some embodiments, for example for a display screen equipped with a proximity sensor, the operation may also include the user's finger approaching a sound trigger key in the audio trigger area displayed on the touch screen, e.g., hovering over the corresponding region of the screen.
Since time is conventionally represented along the horizontal direction, the audio progress bar of the present invention is preferably displayed as a progress bar extending horizontally. Accordingly, a sound marker may be a line segment of predetermined length extending horizontally and displayed together with the timeline, where the length of the segment typically characterizes the duration of the sound effect. The marker may also be displayed as a line graph, whose intensity and height characterize the sound effect.
In the embodiment shown in fig. 2, the audio progress bar in the audio rhythm display region 10 may include a fixed timeline 12 and a progress bar that moves over time. As shown in the figure, during audio playback, the progress bar recording the audio and/or sound effect information may travel from the right side of the screen toward the left, and the content of the progress bar currently passing through the timeline 12 corresponds to the content of the audio currently being played.
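The fixed-timeline layout above amounts to a simple coordinate mapping: the bar scrolls right-to-left so that whatever content is aligned with the fixed timeline is what is currently audible. A sketch under assumed values (`PIXELS_PER_SECOND`, `TIMELINE_X` are illustrative, not from the patent):

```python
PIXELS_PER_SECOND = 50   # horizontal scale of the progress bar (assumed)
TIMELINE_X = 120         # fixed x-position of the timeline on screen (assumed)

def marker_screen_x(marker_start: float, playhead: float) -> float:
    """X position of a marker placed at `marker_start` seconds, given the
    current playhead time. A marker crosses TIMELINE_X exactly when it plays."""
    return TIMELINE_X + (marker_start - playhead) * PIXELS_PER_SECOND

# A marker at t=10s sits exactly on the timeline when the playhead reaches 10s.
print(marker_screen_x(10.0, 10.0))  # 120.0
# Two seconds earlier, it is still 100 px to the right of the timeline.
print(marker_screen_x(10.0, 8.0))   # 220.0
```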
Fig. 3 illustrates an operational rendering of an audio editing page according to one embodiment of the present invention. As shown in fig. 3, the progress bar 11 travels from the right side of the screen toward the left, as indicated by the gray arrow, along with playback of the background audio loaded by the user. When the user taps a sound trigger key (e.g., the gray key 21 in the figure) at some moment during playback, the audio module of the mobile terminal may play the sound effect corresponding to that key (i.e., superimpose it on the background audio being played) through the device speaker or a connected earphone or sound box, and accordingly a corresponding line segment 13 is generated in the advancing audio progress bar. The display of line segment 13 passing through timeline 12 can be regarded as the sound marker in this example. To this end, step S130 may include synchronously displaying line segments of corresponding lengths in the audio progress bar within the audio rhythm display area in response to the user's operation of a sound trigger key within the audio trigger area. In one embodiment, the length of a segment may correspond to how long the user held the key. In other embodiments, a single tap may correspond to a segment of a standard (or shortest) length, while the system recognizes a sustained press (as distinguished from a tap) and generates a segment whose length corresponds to the duration of the press.
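The tap-versus-press distinction can be sketched as a small function. The threshold and standard length here are assumed values for illustration; the patent does not specify them.

```python
MIN_SEGMENT_SEC = 0.25   # standard segment length for a single tap (assumed)
PRESS_THRESHOLD = 0.3    # held longer than this counts as a press (assumed)

def segment_duration(hold_time: float) -> float:
    """Duration of the sound effect / line segment produced by a key event."""
    if hold_time < PRESS_THRESHOLD:
        return MIN_SEGMENT_SEC   # tap: fixed standard length
    return hold_time             # press: length tracks the hold time

print(segment_duration(0.1))  # quick tap -> 0.25
print(segment_duration(1.2))  # long press -> 1.2
```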
Although fig. 2 and 3 show a timeline 12 at a fixed position, in other embodiments the timeline 12 may take other forms, such as a time point (for example, a small inverted triangle placed on the progress bar 11). Additionally or alternatively, the audio rhythm display area may adopt the opposite layout, in which the timeline or time point moves over time while the progress bar is displayed fixed. This scheme is particularly suitable for preview or playback of short audio edits. In that case, the progress bar 11 may be displayed at full length within the audio rhythm display region 10, and as the audio plays, the timeline 12 slides from the left side of the screen to the right (i.e., opposite to the gray arrow in fig. 3).
As shown in fig. 2 and 3, for ease of operation, multiple sound trigger keys may be displayed simultaneously within the audio trigger area. In the present invention, a "sound trigger key" may also be referred to as a "key", or, in connection with the interaction scheme below, a "play key". Unlike conventional instrument keys (e.g., piano keys) displayed in a single row, the layout of the sound trigger keys of the present invention differs from the trigger layout of the original instrument; in particular, they can be displayed as virtual pads arranged in multiple rows and columns, such as the illustrated 12-key pad. A percussion pad (launchpad) in music production is similar to an electronic synthesizer: it can work with audio editing software for editing and control, and the timbre and phrase of each pad can be customized.
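A 12-pad layout of the kind described can be sketched as a 3-by-4 grid, each pad mapped to a pitch. The specific pitch assignment below is an assumption for illustration only.

```python
ROWS, COLS = 3, 4
PITCHES = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5",
           "D5", "E5", "F5", "G5"]   # assumed mapping, 12 pads

def pad_grid():
    """Return the pad layout as a list of rows, each row a list of pitches."""
    return [PITCHES[r * COLS:(r + 1) * COLS] for r in range(ROWS)]

for row in pad_grid():
    print(row)
```

Each cell of this grid corresponds to one large touch target, which is what keeps taps accurate on a small phone screen compared with a one-dimensional piano-key strip.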
Compared with key operation pages that imitate an actual instrument in the prior art, sound trigger keys displayed in multiple rows and columns better suit the limited display area of a mobile terminal screen, making the keys convenient to press while avoiding mis-taps. Because the key operation pages in existing audio editing software contain many keys arranged in only one dimension, they often occupy the entire screen, so the key page and the editable audio progress bar area must be displayed on separate pages. With the progress bar and keyboard area not visible on one screen, a user must switch back and forth between multiple interfaces when editing audio, which is inefficient. In addition, limited by screen size, an operation page that directly imitates the black and white keys of a piano cannot simply be ported to the mobile terminal.
In other embodiments, the keys within the audio trigger area may have other display forms. For example, multiple sound trigger keys may be displayed in a single column. In a novice mode or another mode that provides auxiliary prompts, a small number of keys, e.g., a single column of four, may be displayed to make exploration easier. The system may also analyze the audio to be edited, determine the range of pitch inputs suited to that audio, and provide a small number of keys within that range. In the case of percussion, only two drumbeat keys may be displayed. Alternatively, multiple sound trigger keys may be displayed in a ring, or in a center-plus-ring arrangement; for example, when editing percussion, two drum keys may be displayed in the middle with other percussion sounds, such as a triangle, displayed around them.
The multiple sound trigger keys displayed simultaneously within the audio trigger area may belong to the same sound classification. For example, tabs may be provided between the progress bar area and the sound trigger key area so that the user can switch between different sound categories. Accordingly, the page presentation method of the present invention may further include: in response to a selection operation on a corresponding sound option, switching the display among pluralities of sound trigger keys belonging to different sound classifications.
In particular, the different sound classifications may comprise at least one of: different instrument classifications and different sound effect classifications. The simultaneously displayed sound trigger keys may then each correspond to a different pitch or a different sub-effect. In one embodiment, the sound classifications may be divided into two major categories, instruments and sound effects, with subdivided minor categories under the respective tabs. The instrument classification may include different instrument timbres such as piano, violin, and 8-bit (the musical style of early console games). In a broader definition, the instrument classification may include instrument effects of various styles, as long as each key corresponds to a particular pitch under a particular style (in other embodiments, one key may correspond to one chord, i.e., one fixed combination of pitches). In contrast, the sound effect classification covers sound materials that are not based on pitch, simulating various scenes, such as bird song (e.g., a cuckoo call including a high-low transition), wind, or rain.
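The two-level classification above (major category tab, then instrument or effect group, then pad) can be sketched as a nested lookup. All names and groupings below are illustrative assumptions, not the patent's data.

```python
SOUND_BANK = {
    "melody": {            # instrument classification: pad -> pitch
        "piano":  {"pad_1": "C4", "pad_2": "E4", "pad_3": "G4"},
        "violin": {"pad_1": "C4", "pad_2": "E4", "pad_3": "G4"},
    },
    "effect": {            # sound effect classification: pad -> sample name
        "nature": {"pad_1": "bird_song", "pad_2": "wind", "pad_3": "rain"},
    },
}

def resolve(tab: str, group: str, pad: str) -> str:
    """Look up what a pad triggers under the currently selected tab/group."""
    return SOUND_BANK[tab][group][pad]

print(resolve("melody", "piano", "pad_1"))   # C4
print(resolve("effect", "nature", "pad_3"))  # rain
```

Switching tabs changes only which branch of the table the pads are bound to, so the same 12-pad grid can serve every instrument and effect group.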
Fig. 4 shows an example of a page rendering according to the invention. Fig. 5 shows another example of page rendering according to the invention. In the example of fig. 4 and 5, the instrument category is displayed as a "melody" tab and the sound effect category is displayed as a "sound effect" tab.
Under the instrument classification, a specific instrument can be selected. For the same instrument category, multiple sound trigger keys corresponding to different pitches of that instrument may be displayed simultaneously within the audio trigger area, as with the piano shown in fig. 4. Upon selecting the piano tab, multiple sound trigger keys in a row-and-column arrangement (e.g., the 12 pads shown) are displayed within the audio trigger area. The pitch represented by each key may be marked on the key for ease of operation. Sound markers of different pitches may be displayed differently in the audio progress bar; for example, markers corresponding to different pitches may be displayed at different heights of the bar, with greater vertical height corresponding to higher pitch.
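The "higher pitch, higher marker" rule is a linear mapping from pitch to vertical position. The sketch below uses MIDI note numbers as a convenient pitch scale and assumed display constants; none of these specifics come from the patent.

```python
AREA_HEIGHT = 200              # pixel height of the rhythm display area (assumed)
NOTE_LOW, NOTE_HIGH = 48, 84   # displayed pitch range, C3..C6 (assumed)

def marker_y(midi_note: int) -> float:
    """Y-offset of a sound marker from the bottom of the display area:
    higher notes sit higher up."""
    frac = (midi_note - NOTE_LOW) / (NOTE_HIGH - NOTE_LOW)
    return frac * AREA_HEIGHT

print(marker_y(48))  # lowest displayed note -> 0.0
print(marker_y(84))  # highest displayed note -> 200.0
print(marker_y(66))  # midpoint of the range -> 100.0
```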
When the user presses a sound trigger key within the audio trigger area, in addition to outputting the corresponding audio and displaying it within the audio rhythm display area (e.g., showing the corresponding sound marker in the audio progress bar), the display of the operated key may be simultaneously changed, for example in color, shape, or fill. As shown in fig. 4, the C, E, and G keys light up when pressed, changing from dark gray to a particular color (e.g., red) to visually indicate to the user that the three keys are being pressed. This press operation causes a new sound marker to be generated in the audio progress bar. In the example of fig. 4, it may be specified that a single tap of a key (rather than a long press) under the instrument classification produces a sound effect of a particular duration and a fixed-length line segment corresponding to that duration. This is why the C, E, and G keys just tapped in fig. 4 form line segments of predetermined length to the right of the timeline: the tap may produce an audio effect as long as one second (e.g., one beat at 60 beats per minute), so even after the tap ends, the effect and its corresponding segment last longer than the action itself.
As shown in FIG. 4, the C, E and G keys are pressed simultaneously to achieve a chord effect, and the sound mark displayed in the sound effect progress bar area is the corresponding chord mark. Chords give a fuller and more soothing sound effect than the single tone of a single key. Chord input, however, generally requires the user to have some knowledge of music theory. To improve the usability of the APP, the method may further include transforming the display of one or more sound trigger keys in the audio trigger area as an operation prompt. For example, the display of multiple sound trigger keys within the audio trigger area may be altered simultaneously to prompt the user toward a chord input operation.
The display transformation presented by a prompt is typically different from the transformation presented by the user's keystrokes, so that the user can distinguish the two. As also shown in FIG. 4, after the user has clicked C, E and G simultaneously (or nearly simultaneously), for example according to a previous prompt, the display of D, G and B may be transformed simultaneously, e.g., by framing those three keys, to suggest that the user may next click them simultaneously to enter another chord. The user may click based on the prompt or may ignore it altogether. In some embodiments, the system may provide chord input prompts to the user in a dedicated prompt mode, or during the user's novice period. The prompt may be a random chord prompt, or a chord prompt generated by matching against the loaded audio.
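One hypothetical way to drive such prompts is a lookup from the chord just played to the next suggested chord. The progression below is purely illustrative (not taken from the disclosure), and a real implementation would derive it from the loaded audio or a prompt script:

```python
# Assumed demo progression: C major -> G major voicing -> A minor voicing.
NEXT_CHORD = {
    ("C", "E", "G"): ("D", "G", "B"),
    ("D", "G", "B"): ("C", "E", "A"),
}

def prompt_after(played):
    """Return the keys to frame as the next suggested chord, or None if
    the chord just played has no scripted follow-up."""
    return NEXT_CHORD.get(tuple(played))
```

After the user completes C-E-G, the app would frame D, G and B as the next suggestion; an unscripted input simply produces no prompt.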
In some embodiments, chords of different pitches may have different colors, and the prompts may take display forms other than simple framing. In one embodiment, the chord prompt may be a dark-toned block gradually filling the key frame (e.g., from top to bottom), while the user's actual chord click may be displayed as a light-toned block of the same hue. The gradual filling of the key frame in the chord prompt may also suggest the appropriate click timing to the user, e.g., clicking as the fill nears completion.
In the sound effect classification scenario, sound trigger keys corresponding to different sound effects categorized under the same kind are included in the same sound effect classification. As shown in fig. 5, the sound effect classification scene corresponding to the sound effect tab may include sub-tabs such as "recent", "scene", "electric sound", "animal", etc. Within the "animal" tab, for example, each key may correspond to an animal sound, such as a cat's meow or a dog's bark. Displaying corresponding sound marks within the audio rhythm display region in a sound effect scene may then include displaying sound marks corresponding to different sound effects in different colors and/or shapes. In contrast to fig. 4, the sound marks corresponding to the sound effect keys in fig. 5 are preferably displayed as thicker lines, since such sound effects generally carry more sound information than instrument tones and chords, and the lines can be distinguished from one another by different colors.
As shown in fig. 4 and 5, a "preview" button may also be directly included in the audio editing page. In other words, the user can perform a preview directly within the audio editing page, including playback of the edited audio and the corresponding advance of the sound effect progress bar. To this end, the page presentation method of the present invention may further include: in response to a preview operation by the user, directly playing back the previously generated sound effect progress bar in the audio rhythm display area, the progress bar including sound marks generated in prior editing. Fig. 5 can be regarded as an example of page rendering under the preview operation. While playing back the previously generated sound effect progress bar, when a line segment passes the timeline or a given time point, the display of the corresponding sound trigger key in the audio trigger area may also be changed; for example, a sound effect key may be reduced, highlighted, or darkened when its sound effect begins to play. In one embodiment, a click of a sound effect key may correspond to a sound effect of a specific length, while a press corresponds to a sound effect lasting as long as the press. In other embodiments, the playback length of the sound effect may be fully synchronized with the key-press length.
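During such a preview, determining which keys to highlight reduces to finding the marks whose segments span the current playhead time. A minimal sketch, assuming each mark is represented as a (key, start, duration) tuple in seconds:

```python
def active_keys(marks, now):
    """marks: list of (key_id, start_s, duration_s) tuples. Return the ids
    of the keys whose sound is playing at time `now`, i.e. the keys whose
    display should be altered (reduced, highlighted, or darkened)."""
    return [k for (k, start, dur) in marks if start <= now < start + dur]
```

Called once per rendered frame with the current playhead time, this yields the set of keys to restyle; all other keys revert to their normal display.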
In addition, as shown in fig. 4, a rectangular progress bar as indicated by a mark 11 in fig. 2 and 3 may be displayed in the progress bar region when melody editing or playback is performed. The progress bar has a certain height in the vertical direction, thereby facilitating the display of different tone pitches, e.g., chords of different tone pitches, therein at different heights. In contrast, when the sound effects (the illustrated sound effect tab) shown in fig. 5 are edited or played back, the rectangular progress bar as indicated by the mark 11 in fig. 2 and 3 may not be displayed, but the progress bar may be directly represented by the entire region as indicated by the mark 10 in fig. 2 and 3. At this time, it is possible to display a waveform of background audio (for example, in a dark color or a gray scale, not shown in the figure) in the background portion and display sound marks corresponding to respective sounds (including melody and sound effect) in the foreground.
In some embodiments, the audio editing page may further include a video presentation area, and the method further includes: displaying video content within the video presentation area. In other words, the audio editing page of the present invention may be a portion of an audiovisual editing page. In the page, only audio can be edited and the video can be played in a matching way; it is also possible to edit both video and audio and adjust the alignment relationship therebetween. To this end, displaying video content within the video presentation area may include: and synchronously displaying the video content with the audio progress bar displayed in the audio rhythm display area. The video presentation area may for example be arranged above the audio tempo display area and for example correspondingly reduce the screen area occupied by the audio trigger area. At this time, the audio trigger area, the audio rhythm display area and the video display area can be displayed at the same time so as to preview or edit the audio and video. Alternatively or additionally, displaying video content within the video presentation area may further comprise: displaying video content in the video display area in place of the audio rhythm display area or the audio trigger area. In this case, the display of the video presentation area may occupy a portion that was originally used to display the audio tempo display area or the audio trigger area, or other area of the page. In addition, the video may be displayed in the video display area as a floating window.
Besides the user directly operating the sound trigger keys in the audio trigger area, e.g., via touch, audio editing operations may also be performed by establishing an external mapping. To this end, performing an audio editing operation using the sound trigger keys in the audio trigger area may include: establishing a mapping between an external input and the sound trigger keys in the audio trigger area; and operating the sound trigger keys via that external input. The external input may come from an external input device, from another APP on the mobile terminal, from a networked user, etc. For example, the method may include establishing a mapping of an external input device to the sound trigger keys, and operating the sound trigger keys with that device to perform audio editing operations. Alternatively or additionally, a mapping of external input actions and/or positions to the sound trigger keys may be established, and the sound trigger keys operated via those actions and/or positions.
When an external input device is used, a mapping operation of the device and a mobile terminal can be established first, so that a specific operation for the device can correspond to an operation on a sound trigger key. In one embodiment, a physical 4x3 pad may be used to map with the mobile terminal (e.g., via bluetooth), and then the user may perform the sound-triggered operation of the corresponding key by operating the physical pad. In another embodiment, a physical keyboard, such as a portion of a numeric input keypad, may be mapped with the mobile terminal (e.g., via a bluetooth or wired connection), and the user may then perform voice-activated operation of the corresponding key through operation of the numeric input keypad. In addition, the user can also input sound trigger keys through a mouse, a touch pad and even a television remote controller with visual identification for establishing mapping.
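A sketch of such a device mapping for the 4x3 pad case follows. The note layout and names are assumptions for illustration; a real implementation would receive button events over a Bluetooth/HID channel rather than direct function calls:

```python
# Map each (row, col) of a physical 4x3 pad to an on-screen key index.
PAD_TO_KEY = {(row, col): row * 3 + col for row in range(4) for col in range(3)}

# Assumed layout of the 12 on-screen sound trigger keys.
NOTE_NAMES = ["C", "D", "E", "F", "G", "A", "B",
              "C2", "D2", "E2", "F2", "G2"]

def on_pad_press(row, col):
    """Translate a physical pad press into the on-screen key to trigger."""
    return NOTE_NAMES[PAD_TO_KEY[(row, col)]]
```

The same dictionary-lookup shape applies to the numeric keypad, mouse, or remote-control cases: only the left-hand side of the mapping changes.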
When operating with external input actions and/or positions, a mapping of those actions and/or positions to the sound trigger keys is typically established in advance, e.g., the system knows beforehand which actions, gestures, or positions correspond to which keys. The mapping may also be established by the user through a system learning process. Once the mapping is established, the sound trigger keys can be operated via external input actions and/or positions to perform audio editing operations. For example, the "african drum" tab may include only two sound trigger keys corresponding to different drum sounds. In this case, air-slap motions of the user's left and right hands may each be associated with a drum sound via image recognition or 3D information recognition of the user, so that the user can edit the drum track by slapping in the air.
The invention has been described above with emphasis on page rendering in connection with the accompanying drawings. In specific applications, the invention can also be realized as a mobile terminal interaction method. Fig. 6 shows a schematic flow chart of a mobile terminal interaction method according to an embodiment of the invention. The method is particularly applicable to mobile terminals such as smartphones with touch screens of limited display area. The mobile terminal has an audio input function and can install an audio editing APP or singing APP that includes the interactive function of the invention.
In step S610, an input to the playing area of the audio editing page is acquired. In step S620, the input effect is rendered and the corresponding sound is played. In step S630, audio editing may be completed according to the input effect and the sound. In different scenarios of the invention, "audio editing" may refer to different things. In a singing scene, audio editing may apply melody and sound editing to background audio so that the user can publish a final edited work. In a live scenario, the "audio edits" may be published in real time, i.e., each editing operation by a user may be published as it happens. In a collaborative scenario, the completed "audio edit" may be one party's work that is subsequently edited further by others. In an audio-video editing scenario, "audio editing" may also be part of video editing and may include operations such as synchronization and alignment between audio and video.
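The three steps S610-S630 can be sketched as a single input handler. The callables and the edit representation below are assumptions for illustration; in a real APP the rendering and playback would go through the platform's UI and audio frameworks:

```python
def handle_input(event, edits, play, draw_mark):
    """event: (key_id, time_s) from the playing area (S610).
    play/draw_mark: callables standing in for the audio and UI layers."""
    key_id, t = event
    draw_mark(key_id, t)        # render the input effect (S620)
    play(key_id)                # play the corresponding sound (S620)
    edits.append((key_id, t))   # accumulate toward the finished edit (S630)
    return edits
```

Each trigger thus both produces immediate feedback and extends the edit list that ultimately forms the finished work.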
As described above, it is possible to acquire an operation of entering an audio editing page input by a user and display the audio editing page including an audio rhythm display region and an audio trigger region, thereby displaying the audio editing page to the user.
Here, the operation of entering the audio editing page may be any operation input by the user that can display the audio editing page as shown in fig. 2 to 5 above. For example, a corresponding audio editing page may be entered based on a user's audio editing operation. For example, in a singing or karaoke scene, a user may first enter a segment of audio, and click an audio editing button to enter the audio editing page of the present invention after the entry is completed. In one embodiment, the user may load audio for editing, for example, stored locally on the handset. In another embodiment, the user can select the self-contained audio melody under the audio editing page for sound effect editing. In an extreme embodiment, the user may not load any audio and complete the input and editing of all sound elements directly in the audio editing page.
In response to the operation of the user in the playing area, the mobile terminal may play a corresponding sound effect and display a corresponding rendering effect in the audio rhythm display area, for example, display a corresponding sound mark in an audio progress bar in the audio rhythm display area.
As described above, the progress bar may travel from the right side of the screen to the left along with the playing of the background audio loaded by the user, and the audio output module of the mobile terminal may play the background audio via a built-in speaker or a connected device such as an earphone or speaker box. When the user clicks a sound trigger key in the audio trigger area at some moment during audio playback, the audio output module of the mobile terminal may play the sound effect corresponding to that key (i.e., the sound effect generated by the key is superimposed on the background audio being played) through the built-in speaker or a connected earphone or speaker box, and correspondingly, a corresponding sound mark is generated in the advancing audio progress bar.
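The superposition of a key's sound effect onto the playing background audio amounts to sample-wise addition at the trigger offset. A simplified mono sketch (a real implementation would stream through an audio API and handle clipping; the integer samples here are only for clarity):

```python
def mix_effect(background, effect, offset):
    """Add `effect` samples into `background` starting at sample `offset`,
    returning a new mixed buffer; effect samples running past the end of
    the background are dropped."""
    mixed = list(background)
    for i, s in enumerate(effect):
        j = offset + i
        if j < len(mixed):
            mixed[j] += s
    return mixed
```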
When the user edits the background audio, the interactive method may further include: acquiring audio to be played; and playing the audio as background audio. For this purpose, acquiring the input to the playing area of the audio editing page includes: in the playing process of the background audio, acquiring the key operation of a user on a playing key in the playing area of the audio editing page; and playing the corresponding sound includes: playing the background audio on which the corresponding sound effect is superimposed.
Further, a preview operation can also be performed within the audio editing page. To this end, the interaction method may further include: obtaining a preview input for the user (e.g., the user clicks the preview button in FIG. 4 or FIG. 5); playing a previously generated sound effect progress bar in the audio editing page, wherein the sound effect progress bar comprises a sound mark generated based on previous editing; and synchronously playing the audio corresponding to the sound effect progress bar with the sound effect progress bar.
Likewise, the audio trigger zone may include a plurality of simultaneously displayed playing keys belonging to the same sound classification. The interaction method may then further include: acquiring the user's selection of a corresponding sound tab; and switching the display to a plurality of playing keys belonging to a different sound classification, where the different sound classifications include different instrument classifications and may also include different sound effect classifications.
Further, the user can perform custom editing on the sound effect. To this end, the interaction method may further include: acquiring the sampling operation input by the user; displaying a sampling page; acquiring the editing operation of the user in the sampling page; and storing the edited sample and corresponding to a custom playing key.
Further, the user can also adjust the existing sound effect. To this end, the interaction method may further include: acquiring sound effect adjusting operation input by the user; displaying a sound effect adjusting page; acquiring the editing operation of the user in the sound effect adjusting page; and incorporating the adjusted sound effects in the audio.
FIGS. 7A-B show a sound effect editing example according to the present invention. The user may click the "sample" button in the upper right corner of the audio editing page shown in fig. 4 or 5, for example, to enter the edit sample page shown in fig. 7A. In that page the user can adjust volume, speed, pitch, etc., save after being satisfied with the preview, and establish a mapping to a custom sound trigger key under, for example, a "custom" tab. In addition, the user may click the "…" button to the right of the sound and melody tabs in the audio editing page shown in fig. 4 or 5 and select sound effect editing to enter the sound effect adjustment page shown in fig. 7B. There the user can adjust the volume of a given sound effect (e.g., the sound of a sea wave) and fine-tune its position in the audio, e.g., moving it 20 ms earlier, and incorporate the adjusted sound effect into the audio.
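The adjustment step of fig. 7B — changing a sound effect's volume and nudging its position — can be sketched as a pure transformation over an assumed event record (the field names are illustrative, not from the disclosure):

```python
def adjust_effect(event, gain=1.0, shift_ms=0):
    """event: dict with 'start_ms' and 'volume'. Returns an adjusted copy;
    a negative shift_ms moves the effect earlier, floored at 0."""
    adjusted = dict(event)
    adjusted["volume"] = event["volume"] * gain
    adjusted["start_ms"] = max(0, event["start_ms"] + shift_ms)
    return adjusted
```

Applying `shift_ms=-20` reproduces the "20 ms ahead" fine-tune described above; the original event is left untouched so the adjustment can be previewed before being incorporated.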
Further, the present invention can also be realized as an audio editing method. Fig. 8 shows a schematic flow chart of an audio editing method according to the invention. The method is particularly applicable to mobile terminals such as smart phones with touch screens having limited display area. The mobile terminal has an audio input function and comprises an audio editing APP or a singing APP which can be installed and comprises the interactive function of the invention.
In step S810, an audio editing page including an audio rhythm display region and an audio trigger region is displayed. In step S820, the triggering operation of the sound trigger key in the audio trigger area is acquired. In step S830, a sound effect corresponding to the sound trigger key operation is played, and a play display corresponding to the trigger operation is displayed in the audio rhythm display area.
Specifically, the operation of the user on the sound trigger key in the audio trigger area may be acquired in a case where the audio progress bar in the audio rhythm display area is advanced. When a sound effect corresponding to the sound trigger key operation is played, a progressive audio progress bar may be displayed in the audio rhythm display area, for example, a corresponding sound mark may be displayed in the audio progress bar in the audio rhythm display area.
When editing is performed on background audio, the method can further comprise loading and playing existing audio based on user operation, wherein the audio progress bar is used for representing the existing audio.
Further, the audio trigger area may include: a plurality of simultaneously displayed sound trigger keys belonging to the same sound classification, where one sound classification contains different pitches of the same instrument or different sound effects within the same sound effect category.
The audio editing function of the present invention is also applicable to various scenes. In one embodiment, the present invention may also be implemented as an audio/video editing method, including: displaying an audio editing page comprising an audio rhythm display area, an audio trigger area and a video display area; acquiring the triggering operation of a user on a sound triggering key in the audio triggering area; and playing a sound effect corresponding to the sound trigger key operation, and displaying playing display corresponding to the trigger operation in the audio rhythm display area.
Specifically, the operation of the user on the sound trigger key in the audio trigger area may be acquired in a case where the audio progress bar in the audio rhythm display area is advanced. And displaying the advancing audio progress bar in the audio rhythm display area while playing the sound effect corresponding to the sound trigger key operation. Here, the video display area is an area for displaying video.
The video presentation may be synchronized with the audio playback, and the method may then further comprise: displaying corresponding video content in the video presentation area while displaying the advancing audio progress bar in the audio rhythm display area.
The audio and video editing method may be capable of editing only audio, or may further include a video editing function. In the latter case, the tabs in the audio trigger area may include video editing tabs, such as video watermark and text entry tabs, in addition to the sound tabs. After such a tab is clicked, video editing keys may be displayed in the audio trigger area. To this end, the method may further include: acquiring the user's operation on a video editing key in the audio trigger area.
The audio editing function of the present invention can also be used in live scenes. To this end, a live method may include: acquiring and displaying an audio editing page for audio editing by a broadcaster, wherein the audio editing page comprises an audio rhythm display area and an audio trigger area; acquiring a trigger operation of a broadcast master user on a sound trigger button in the audio trigger area, for example, acquiring the operation of the broadcast master on the sound trigger button in the audio trigger area under the condition that an audio progress bar in the audio rhythm display area advances; and playing a sound effect corresponding to the sound trigger key operation, and displaying a playing display corresponding to the trigger operation in the audio rhythm display area, such as displaying a traveling audio progress bar and a sound mark therein. In other words, the broadcaster can directly play the audio editing page as a direct-broadcasting picture, and can accept the sound trigger operation of the viewer by opening the interactive function. Further, a video information stream associated with the broadcaster may also be acquired and displayed. For example, when a user watches a live broadcast using a large-screen terminal, the broadcaster's own video stream and sound can be displayed on the same screen.
The audio editing function of the present invention can also be used in screen sharing scenarios, such as an online lecture scenario, a remote assistance scenario, and the like. To this end, an audio editing sharing method includes: simultaneously displaying an audio editing page comprising an audio rhythm display area and an audio trigger area to a plurality of users; acquiring a trigger operation of a current operation user on a sound trigger key in the audio trigger area, for example, acquiring an operation of the current operation user on the sound trigger key in the audio trigger area under the condition that an audio progress bar in the audio rhythm display area advances; and playing a sound effect corresponding to the sound trigger key operation to the plurality of users, and displaying a playing display corresponding to the trigger operation in the audio rhythm display area, such as displaying a traveling audio progress bar and a sound mark therein. Specifically, the user can become the current operation user capable of operating the page by selecting the "i'm to operate" function or switching the identity of the executor.
As previously mentioned, the audio editing functions of the present invention may be implemented as part of a singing APP. Accordingly, the present invention can also be implemented as an audio production method, including: acquiring an audio performance work of a user; entering an audio editing page comprising an audio rhythm display area and an audio trigger area; acquiring a trigger operation of the user on a sound trigger key in the audio trigger area while the audio progress bar in the audio rhythm display area advances, the audio progress bar corresponding to the audio performance work; and playing a sound effect corresponding to the sound trigger key operation superimposed on the audio performance work, while displaying the advancing audio progress bar in the audio rhythm display area. Obtaining the user's audio performance work may include recording the audio of the user's performance. The recording may be part of a video recording or purely audio, and may capture the user's playing (for example, using a physical instrument or a playing page within the APP, which may itself include sound trigger keys) or the user's singing under background sound (for example, a karaoke scene).
The above-described method of the invention can be implemented as a corresponding apparatus. FIG. 9 is a schematic diagram illustrating the components of a page rendering apparatus according to an embodiment of the present invention. The page presentation device 900 may include an audio editing page display unit 910 and an audio editing page refresh unit 920. The audio editing page display unit 910 is configured to display an audio editing page including an audio rhythm display region and an audio trigger region. The audio editing page refreshing unit 920 is configured to display the audio rhythm display area in response to a trigger signal of the audio trigger area, for example, a corresponding sound mark may be displayed in an audio progress bar in the audio rhythm display area in response to a user operating a sound trigger key in the audio trigger area.
Fig. 10 is a schematic diagram illustrating the components of a mobile-side interaction device according to an embodiment of the present invention. The mobile terminal interaction apparatus 1000 may include an input page acquisition unit 1010, a rendering and playing unit 1020, and an audio editing unit 1030. An input page acquisition unit 1010 for acquiring an input to an audio editing page playing region; the rendering and playing unit 1020 is configured to render the input effect and play the corresponding sound; the audio editing unit 1030 is configured to complete audio editing according to the input effect and the sound.
Fig. 11 is a schematic diagram showing the composition of an audio editing apparatus according to an embodiment of the present invention. The audio editing apparatus 1100 may include an audio editing page display unit 1110, a sound trigger key selection unit 1120, and a play and display unit 1130. The audio editing page display unit 1110 is configured to display an audio editing page including an audio rhythm display region and an audio trigger region. The sound trigger key selection unit 1120 is configured to obtain a trigger operation of the sound trigger key in the audio trigger area by the user, for example, obtain an operation of the sound trigger key in the audio trigger area by the user in a case that the audio progress bar in the audio rhythm display area travels. The playing and display unit 1130 is configured to play a sound effect corresponding to a sound trigger key operation, and display a playing display corresponding to the trigger operation, for example, a traveling audio progress bar and a sound mark therein, in the audio progress bar in the audio rhythm display area.
The above-described apparatuses may all be implemented as virtual APP apparatuses that perform the methods described above in connection with figs. 1-8. Specifically, the page rendering apparatus 900 may be implemented in combination with the page rendering and refreshing functions of the mobile terminal, while the mobile terminal interaction apparatus 1000 and the audio editing apparatus 1100 may be implemented as functional modules of an audio editing APP or singing APP.
Fig. 12 is a schematic structural diagram of a computing device that can be used to implement the page rendering, mobile terminal interaction, and audio editing methods described above according to an embodiment of the invention.
Referring to fig. 12, computing device 1200 includes memory 1210 and processor 1220. The computing device 1200 may be a mobile device equipped with an audio output unit, such as a self-contained speaker or audio output via a wired or wireless connection to headphones or a speaker box. For example, the computing device may be a touchscreen equipped smartphone, preferably with recording functionality, to enable, for example, the entry of singing.
Processor 1220 may be a multi-core processor or may include multiple processors. In some embodiments, processor 1220 may include a general-purpose host processor and one or more special purpose coprocessors such as a Graphics Processor (GPU), Digital Signal Processor (DSP), or the like. In some embodiments, the processor 1220 may be implemented using custom circuitry, such as an Application Specific Integrated Circuit (ASIC) or a Field Programmable Gate Array (FPGA).
The memory 1210 may include various types of storage units, such as system memory, read-only memory (ROM), and permanent storage. The ROM may store static data or instructions for the processor 1220 or other modules of the computer. The permanent storage may be a readable and writable storage device, and may be a non-volatile device that does not lose stored instructions and data even after the computer is powered off. In some embodiments, the permanent storage employs a mass storage device (e.g., a magnetic or optical disk, or flash memory); in other embodiments, it may be a removable storage device (e.g., a floppy disk or optical drive). The system memory may be a read-write memory device or a volatile read-write memory device, such as dynamic random access memory, and may store instructions and data that some or all of the processors require at runtime. In addition, memory 1210 may include any combination of computer-readable storage media, including various types of semiconductor memory chips (DRAM, SRAM, SDRAM, flash, programmable read-only memory), magnetic disks, and/or optical disks. In some embodiments, memory 1210 may include a readable and/or writable removable storage device, such as a compact disc (CD), a read-only digital versatile disc (e.g., DVD-ROM, dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-high-density optical disc, a flash memory card (e.g., SD card, mini-SD card, Micro-SD card, etc.), a magnetic floppy disk, or the like. Computer-readable storage media do not encompass carrier waves or transitory electronic signals transmitted wirelessly or over wires.
The memory 1210 has stored thereon executable code that, when processed by the processor 1220, may cause the processor 1220 to perform the page rendering and mobile-side interaction methods described above.
The page rendering and mobile terminal interaction scheme according to the present invention has been described in detail above with reference to the accompanying drawings. By presenting playback and the composed score on the same screen at the mobile terminal and using a key layout better suited to phone operation (such as the 12-pad layout), the audio editing scheme of the invention can achieve, on a mobile terminal, a music production effect similar to that of a Digital Audio Workstation (DAW) on a desktop computer. The scheme can also generate melody prompts based on a chord score, lowering the threshold for users to create melodies.
Furthermore, the method according to the invention may also be implemented as a computer program or computer program product comprising computer program code instructions for carrying out the above-mentioned steps defined in the above-mentioned method of the invention.
Alternatively, the invention may also be embodied as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) having stored thereon executable code (or a computer program, or computer instruction code) which, when executed by a processor of an electronic device (or computing device, server, etc.), causes the processor to perform the steps of the above-described method according to the invention.
Those of skill in the art would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary rather than exhaustive, and is not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or improvements over technologies in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (45)

1. A page rendering method, comprising:
displaying an audio editing page comprising an audio trigger area and an audio rhythm display area; and
displaying the audio rhythm display area in response to a trigger signal of the audio trigger area.
2. The method of claim 1, wherein displaying the audio rhythm display area in response to a trigger signal of the audio trigger area comprises:
displaying a corresponding sound mark in the audio rhythm display area in response to operation of a sound trigger key within the audio trigger area.
3. The method of claim 2, wherein the audio rhythm display area comprises:
a fixed time line or time point and a progress bar that moves over time; or
a time line or time point that moves over time and a fixed progress bar.
4. The method of claim 3, wherein displaying a corresponding sound mark in the audio rhythm display area in response to operation of a sound trigger key within the audio trigger area comprises:
synchronously displaying, in the audio rhythm display area, a line segment of corresponding length or unit length in response to operation of the sound trigger key within the audio trigger area.
5. The method of claim 1, wherein the displayed audio trigger area comprises:
a plurality of sound trigger keys belonging to the same sound classification.
6. The method of claim 5, wherein the layout of the plurality of sound trigger keys is different from the triggering layout of the original musical instrument.
7. The method of claim 6, wherein the plurality of sound trigger keys comprises:
a plurality of sound trigger keys displayed in a plurality of rows and a plurality of columns.
8. The method of claim 6, wherein the plurality of sound trigger keys comprises:
a plurality of sound trigger keys displayed as a virtual pad keyboard.
9. The method of claim 6, wherein the audio trigger area comprises:
a sound option for switching, by category, the plurality of sound trigger keys displayed.
10. The method of claim 9, further comprising:
switching the display among a plurality of sound trigger keys belonging to different sound classifications in response to a selection operation on the corresponding sound option.
11. The method of claim 10, wherein the different sound classifications include:
different instrument classifications; or different sound effect classifications.
12. The method of claim 11, wherein the sound trigger keys correspond to different pitches or different sub-sound effects.
13. The method of claim 12, wherein sound marks in different display styles correspond to different pitches or different sound effects.
14. The method of claim 6, further comprising:
transforming the display of one or more sound trigger keys within the audio trigger area as an operation prompt.
15. The method of claim 14, wherein transforming the display of one or more sound trigger keys within the audio trigger area as an operation prompt comprises:
simultaneously transforming the display of a plurality of sound trigger keys within the audio trigger area as a chord input operation prompt.
16. The method of claim 6, further comprising:
transforming the display of the corresponding sound trigger key in response to operation of a sound trigger key within the audio trigger area.
17. The method of claim 6, further comprising:
playing a previously generated sound effect progress bar in response to a preview operation, wherein the sound effect progress bar comprises the display previously presented in the audio rhythm display area in response to trigger signals of the audio trigger area.
18. The method of claim 17, wherein the display previously presented in the audio rhythm display area in response to the trigger signal of the audio trigger area includes a sound mark,
and the method further comprises:
transforming, within the audio trigger area, the display of the sound trigger key corresponding to a previously generated sound mark when that sound mark is played.
19. The method of claim 1, wherein the audio editing page further comprises a video display area, and the method further comprises:
displaying video content within the video display area.
20. The method of claim 19, wherein displaying video content within the video display area comprises at least one of:
displaying video content synchronously with the display of an audio progress bar in the audio rhythm display area;
displaying video content in the video display area in place of the audio rhythm display area or the audio trigger area; and
displaying video content within the video display area as a floating window.
21. The method of claim 1, wherein displaying the audio rhythm display area in response to a trigger signal of the audio trigger area comprises:
establishing a mapping between external inputs and the sound trigger keys in the audio trigger area; and
operating the sound trigger keys through the external inputs.
22. A mobile terminal interaction method, comprising:
acquiring an input to a playing area of an audio editing page;
rendering the effect of the input and playing the corresponding sound; and
completing audio editing according to the input effect and the sound.
23. The method of claim 22, wherein rendering the effect of the input comprises:
displaying a corresponding sound mark within the audio rhythm display area in response to the acquired input.
24. The method of claim 23, further comprising:
acquiring audio to be played; and
playing the audio as background audio,
wherein acquiring the input to the playing area of the audio editing page comprises:
acquiring a key operation on a playing key in the playing area of the audio editing page during playback of the background audio,
and playing the corresponding sound comprises:
playing the background audio with the corresponding sound effect superimposed.
25. The method of claim 22, further comprising:
acquiring a preview operation;
playing a previously generated sound effect display in the audio editing page, the sound effect display including sound marks generated based on the previous rendering; and
playing the corresponding sound marks in synchronization with the sound effect progress bar.
26. The method of claim 22, wherein the audio trigger area comprises:
a plurality of playing keys belonging to the same sound classification,
and the method further comprises:
acquiring a selection operation on a corresponding sound effect option card; and
switching the display among a plurality of playing keys belonging to different sound classifications,
wherein the different sound classifications include:
different instrument classifications; or
different sound effect classifications.
27. The method of claim 26, further comprising:
acquiring a sampling operation;
displaying a sampling page;
acquiring an editing operation in the sampling page; and
storing the edited sample and associating it with a custom playing key.
28. The method of claim 26, further comprising:
acquiring a sound effect adjustment operation;
displaying a sound effect adjustment page;
acquiring an editing operation in the sound effect adjustment page; and
incorporating the adjusted playing key in the audio.
29. An audio editing method, comprising:
displaying an audio editing page comprising an audio rhythm display area and an audio trigger area;
acquiring a trigger operation on a sound trigger key within the audio trigger area; and
playing a sound effect corresponding to the sound trigger key operation, and presenting a playing display corresponding to the trigger operation in the audio rhythm display area.
30. The method of claim 29, wherein playing a sound effect corresponding to the sound trigger key operation and presenting a playing display corresponding to the trigger operation in the audio rhythm display area comprises:
playing a sound effect corresponding to the operation of the sound trigger key, and displaying a corresponding sound mark in the audio rhythm display area.
31. The method of claim 29, further comprising:
loading and playing existing audio based on a user operation,
wherein an audio progress bar is used to represent the existing audio.
32. The method of claim 31, wherein the audio trigger area comprises:
a plurality of sound trigger keys belonging to the same sound classification, each sound trigger key corresponding to a different pitch or a different sub-sound effect.
33. An audio-video editing method, comprising:
displaying an audio editing page comprising an audio rhythm display area, an audio trigger area, and a video display area;
acquiring a user's trigger operation on a sound trigger key within the audio trigger area; and
playing a sound effect corresponding to the trigger operation, and presenting a playing display corresponding to the trigger operation in the audio rhythm display area.
34. The method of claim 33, further comprising:
displaying the playing display corresponding to the trigger operation in the audio rhythm display area while simultaneously displaying corresponding video content in the video display area.
35. The method of claim 34, further comprising:
switching the audio trigger area to a video editing area based on a user operation; and
acquiring operations on the video editing area.
36. A live streaming method, comprising:
acquiring and displaying an audio editing page on which a broadcaster performs audio editing, the audio editing page comprising an audio rhythm display area and an audio trigger area;
acquiring the broadcaster's trigger operation on a sound trigger key within the audio trigger area; and
playing a sound effect corresponding to the sound trigger key operation, and presenting a playing display corresponding to the trigger operation in the audio rhythm display area.
37. The method of claim 36, further comprising:
acquiring and displaying a video information stream associated with the broadcaster.
38. An audio editing sharing method, comprising:
simultaneously displaying, to a plurality of users, an audio editing page comprising an audio rhythm display area and an audio trigger area;
acquiring the currently operating user's trigger operation on a sound trigger key within the audio trigger area; and
playing, to the plurality of users, a sound effect corresponding to the sound trigger key operation, and presenting the playing display corresponding to the trigger operation in the audio rhythm display area.
39. A method of audio production, comprising:
acquiring an audio performance work;
entering an audio editing page comprising an audio rhythm display area and an audio trigger area;
acquiring a user's trigger operation on a sound trigger key within the audio trigger area while an audio progress bar in the audio rhythm display area advances, wherein the audio progress bar corresponds to the audio performance work; and
superimposing and playing, on the audio performance work, a sound effect corresponding to the sound trigger key operation, and presenting the playing display corresponding to the trigger operation in the audio rhythm display area.
40. The method of claim 39, wherein acquiring the audio performance work comprises:
inputting audio of the user's performance.
41. A page rendering apparatus, comprising:
an audio editing page display unit for displaying an audio editing page comprising an audio rhythm display area and an audio trigger area; and
an audio editing page refreshing unit for displaying the audio rhythm display area in response to a trigger signal of the audio trigger area.
42. A mobile terminal interaction apparatus, comprising:
an input acquisition unit for acquiring an input to a playing area of an audio editing page;
a rendering and playing unit for rendering the effect of the input and playing the corresponding sound; and
an audio editing unit for completing audio editing according to the input effect and the sound.
43. An audio editing apparatus, comprising:
an audio editing page display unit for displaying an audio editing page comprising an audio rhythm display area and an audio trigger area;
a sound trigger key selection unit for acquiring a user's trigger operation on a sound trigger key within the audio trigger area; and
a playing and displaying unit for playing the sound effect corresponding to the sound trigger key operation and presenting the playing display corresponding to the trigger operation in the audio rhythm display area.
44. A computing device, comprising:
a processor; and
a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method of any one of claims 1-40.
45. A non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to perform the method of any of claims 1-40.
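As an informal illustration of the interaction claimed above (claims 1-4 and 21), the following sketch (hypothetical Python; the class and method names are illustrative assumptions, not the claimed implementation) models a trigger-key press that plays a sound and synchronously appends a sound mark to the audio rhythm display area, plus a mapping that routes external inputs to on-screen trigger keys.

```python
# Hypothetical sketch of the claimed interaction: pressing a sound trigger
# key plays a sound and synchronously appends a sound mark (a segment of
# corresponding or unit length) to the audio rhythm display area.
# Names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AudioEditingPage:
    # Sound marks already drawn in the rhythm display area, recorded as
    # (key_id, start_time, length) tuples.
    rhythm_marks: list = field(default_factory=list)
    # Optional mapping from external inputs (e.g. hardware pad/MIDI notes)
    # to on-screen sound trigger keys, as in claim 21.
    external_map: dict = field(default_factory=dict)

    def trigger(self, key_id: str, time: float, length: float = 1.0):
        """Handle a trigger signal from the audio trigger area: the rhythm
        display area responds by showing a segment of the given length."""
        self.rhythm_marks.append((key_id, time, length))
        return key_id  # stands in for "play the sound of this key"

    def external_input(self, source_id, time: float):
        """Route an external input to its mapped sound trigger key."""
        return self.trigger(self.external_map[source_id], time)

page = AudioEditingPage(external_map={36: "kick_pad"})
page.trigger("snare_pad", time=0.5)   # on-screen key press
page.external_input(36, time=1.0)     # external input mapped to a key
```

After these two events, `page.rhythm_marks` holds one mark per trigger in time order, which is the state a preview operation (claims 17 and 25) would replay against a moving progress bar.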
CN202010313949.2A 2020-04-20 2020-04-20 Method and device for page presentation, mobile terminal interaction and audio editing Pending CN113535289A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010313949.2A CN113535289A (en) 2020-04-20 2020-04-20 Method and device for page presentation, mobile terminal interaction and audio editing


Publications (1)

Publication Number Publication Date
CN113535289A (en) 2021-10-22

Family

ID=78123698

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010313949.2A Pending CN113535289A (en) 2020-04-20 2020-04-20 Method and device for page presentation, mobile terminal interaction and audio editing

Country Status (1)

Country Link
CN (1) CN113535289A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024012257A1 (en) * 2022-07-12 2024-01-18 北京字跳网络技术有限公司 Audio processing method and apparatus, and electronic device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201260223Y (en) * 2008-06-19 2009-06-17 宇龙计算机通信科技(深圳)有限公司 Ring editor
CN104615586A (en) * 2015-01-21 2015-05-13 上海理工大学 Real-time cooperative editing system
CN105794213A (en) * 2013-11-26 2016-07-20 谷歌公司 Collaborative video editing in cloud environment
CN205541537U (en) * 2016-02-04 2016-08-31 商丘职业技术学院 Teaching piano
US20170047082A1 (en) * 2015-08-10 2017-02-16 Samsung Electronics Co., Ltd. Electronic device and operation method thereof
CN110134479A (en) * 2019-05-10 2019-08-16 杭州网易云音乐科技有限公司 Content page exchange method, generation method, medium, device and calculating equipment
CN110971957A (en) * 2018-09-30 2020-04-07 阿里巴巴集团控股有限公司 Video editing method and device and mobile terminal




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220321

Address after: 510627 room 1701, No. 163, Pingyun Road, Tianhe District, Guangzhou City, Guangdong Province (Location: self compiled room 01) (office only)

Applicant after: Guangzhou Huancheng culture media Co.,Ltd.

Address before: 100102 901, floor 9, building 9, zone 4, Wangjing Dongyuan, Chaoyang District, Beijing

Applicant before: Beijing wall breaker Technology Co.,Ltd.