CN108847207B - Interactive intelligent device and music editing method and device thereof - Google Patents


Info

Publication number
CN108847207B
CN108847207B (application CN201810871375.3A)
Authority
CN
China
Prior art keywords
music
staff
symbol
music symbol
line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810871375.3A
Other languages
Chinese (zh)
Other versions
CN108847207A (en)
Inventor
牛彦杰 (Niu Yanjie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Guangzhou Shirui Electronics Co Ltd
Original Assignee
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Guangzhou Shirui Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Shiyuan Electronics Thecnology Co Ltd and Guangzhou Shirui Electronics Co Ltd
Priority to CN201810871375.3A
Publication of CN108847207A
Application granted
Publication of CN108847207B
Legal status: Active

Classifications

    • G10H7/00 Instruments in which the tones are synthesised from a data store, e.g. computer organs
    • G10H2210/101 Music composition or musical creation; tools or processes therefor
    • G10H2220/106 Graphical user interface [GUI] for graphical creation, edition or control of musical data or parameters, using icons, e.g. selecting, moving or linking icons, on-screen symbols, screen regions or segments representing musical elements or parameters
    • G10H2220/121 Graphical user interface [GUI] for graphical editing of a musical score, staff or tablature
    • G10H2240/021 File editing, i.e. modifying musical data files or streams as such, for MIDI-like files or data streams

Abstract

The invention discloses an interactive intelligent device and a music editing method and apparatus for the same. The method comprises the following steps: displaying a display area of a music editing application; displaying at least one line of staff in the display area when a staff control is triggered; detecting a selected music control, where the music control represents at least one music symbol to be applied to the staff; acquiring the position, in the staff, of the music symbol represented by the music control; and generating a composition file from the music symbols and their positions in the staff. The method and device solve the prior-art technical problem that interactive intelligent devices cannot edit music.

Description

Interactive intelligent device and music editing method and device thereof
Technical Field
The invention relates to the field of music software, in particular to interactive intelligent equipment and a music editing method and device thereof.
Background
The whiteboard function of an interactive intelligent device typically provides music playback and music teaching: the playback function plays music from Midi files or audio files, and the teaching function displays the score corresponding to the keys being pressed.
However, these functions still do not meet users' needs for music production; when a user wants to create music, the interactive intelligent device cannot support it.
For the problem that interactive intelligent devices in the prior art cannot edit music, no effective solution has yet been proposed.
Disclosure of Invention
The embodiment of the invention provides an interactive intelligent device and a music editing method and device thereof, which at least solve the technical problem that the interactive intelligent device cannot edit music in the prior art.
According to an aspect of the embodiments of the invention, a music editing method for an interactive intelligent device is provided, including: displaying a display area of a music editing application; displaying at least one line of staff in the display area when a staff control is triggered; detecting a selected music control, where the music control represents at least one music symbol to be applied to the staff; acquiring the position, in the staff, of the music symbol represented by the music control; and generating a composition file from the music symbols and their positions in the staff.
Further, a music editing mode is detected, the modes including a single-part mode and a multi-part mode; when the mode is the single-part mode, a single line of staff is generated; when the mode is the multi-part mode, multiple lines of staff are generated, the lines including at least one treble staff and one bass staff.
Further, the music control at least includes: a first music control for representing notes and a second music control for representing rests.
Further, a corresponding music symbol is generated after the music control is selected, and its position in the staff is changed by dragging. Acquiring the position, in the staff, of the music symbol represented by the music control includes: detecting the end position of the music symbol in the staff when it is released; and determining that end position as the position of the music symbol in the staff.
Further, while the music symbol is being dragged, if it is above the staff and the distance to the fifth line of the staff is within a first predetermined range, n ledger lines are displayed above the staff; and if it is below the staff and the distance to the first line of the staff is within a second predetermined range, m ledger lines are displayed below the staff.
Further, before the music symbol is released, the added ledger lines are displayed in a color different from that of the staff; after the symbol is released, the ledger lines on which it sits take the same color as the staff.
Further, after the display area of the music editing application is displayed, a first performance parameter representing the performance timbre and a second performance parameter representing the performance meter (beat) are acquired.
Further, after the position in the staff of the music symbol represented by the music control is determined, triggering of the play control is detected, and the corresponding music is played according to the music symbols and their positions.
Further, the composition file is a Midi file, and generating it from the music symbols and their positions in the staff includes: recording the type of each music symbol and its position in the staff; and generating the composition file from the recorded result.
Further, while the type of each music symbol and its position in the staff are recorded, a pointing parameter is also recorded that points from the music symbol to its preceding and following items, indicating the symbol's neighbors.
Further, the Midi file header of the composition file is determined from the Midi format of the music, the number of tracks, and the basic time format type, following the Midi file-header format; a global track, a current track, and a Midi event sequence of the composition file are generated following the Midi track-block format, where the global track carries the global information of the music, the current track carries the performance information of each track, and the Midi event sequence represents the successive events in the music; and the Midi file header, the global track, the current track, and the Midi event sequence are concatenated in order to obtain the composition file.
Further, the Midi event of each music symbol is mapped from the symbol's note value (duration) and pitch; following the pointing parameters, the Midi events of the music symbols are traversed and connected to obtain the Midi event sequence of the composition file.
According to an aspect of an embodiment of the present invention, a music editing apparatus for an interactive smart device is provided, including: a first display module for displaying a display area of the music editing application; a second display module for displaying at least one line of staff in the display area according to the triggered staff control; a detection module for detecting a selected music control, where the music control represents at least one music symbol to be applied to the staff; an acquisition module for acquiring the position, in the staff, of the music symbol represented by the music control; and a generation module for generating a composition file from the music symbols and their positions in the staff.
According to an aspect of an embodiment of the present invention, there is provided a storage medium including a stored program, wherein the program controls a device in which the storage medium is located to execute the steps of: displaying a display area of the music editing application; displaying at least one line of staff in the display area according to the triggered staff control; detecting a selected music control, wherein the music control is used for representing at least one music symbol applied to the staff; acquiring the position of a music symbol represented by a music control in a staff; a composition file is generated based on the musical notation and the location of the musical notation in the staff.
According to an aspect of an embodiment of the present invention, there is provided an intelligent interactive tablet, including a memory for storing information including program instructions and a processor for controlling execution of the program instructions, wherein: the program instructions when loaded and executed by the processor implement the steps of: displaying a display area of the music editing application; displaying at least one line of staff in the display area according to the triggered staff control; detecting a selected music control, wherein the music control is used for representing at least one music symbol applied to the staff; acquiring the position of a music symbol represented by a music control in a staff; a composition file is generated based on the musical notation and the location of the musical notation in the staff.
In the embodiments of the invention, a display area of the music editing application is displayed; at least one line of staff is displayed in the display area according to the triggered staff control; a selected music control is detected, the control representing at least one music symbol to be applied to the staff; the position in the staff of the music symbol represented by the control is acquired; and a composition file is generated from the music symbols and their positions in the staff. This scheme gives the user, on the interactive intelligent device, a music editing application with which music can be freely edited on a staff and the editing result turned into a composition file, thereby solving the prior-art technical problem that interactive intelligent devices cannot edit music.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiments of the invention and together with the description serve to explain the invention and do not constitute a limitation on the invention. In the drawings:
FIG. 1 is a flow chart of a method of composing a music book for an interactive smart device according to embodiment 1 of the present invention;
FIG. 2 is a schematic illustration of a composition application interface according to embodiment 1 of the present application;
FIG. 3 is a schematic illustration of adding musical symbols in a staff notation according to embodiment 1 of the present application;
FIG. 4 is a schematic illustration of a recording result according to embodiment 1 of the present application;
FIG. 5 is a schematic illustration of another recording result according to embodiment 1 of the present application;
FIG. 6 is a schematic diagram of note pointing according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a Midi file header structure in the related art;
FIG. 8 is a schematic diagram of a track block structure according to the related art;
FIG. 9 is a schematic diagram of a sequence of generated Midi events according to embodiment 1 of the present application;
fig. 10 is a flowchart of a method for editing music of an interactive intelligent device according to embodiment 2 of the present invention; and
fig. 11 is a schematic diagram of a music composing device of an interactive intelligent device according to embodiment 3 of the present invention.
Detailed Description
In order that those skilled in the art may better understand the present invention, the technical solutions in the embodiments of the invention are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art from these embodiments without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
According to an embodiment of the present invention, an embodiment of a music editing method for an interactive smart device is provided. It should be noted that the steps illustrated in the flowcharts of the drawings may be performed in a computer system, such as one executing a set of computer-executable instructions, and that although a logical order is shown in the flowcharts, in some cases the steps may be performed in an order different from that described here.
Fig. 1 is a flowchart of a music composing method of an interactive intelligent device according to an embodiment of the present invention.
The music editing method for an interactive intelligent device provided by this embodiment may be executed by a composing display device. The composing display device may be implemented in software and/or hardware and may consist of one, or of two or more, physical entities. In this embodiment, an interactive intelligent tablet is taken as the composing display device by way of example. The interactive intelligent tablet may be an integrated device that controls the content displayed on its panel through touch technology, implements man-machine interaction, and integrates one or more functions of a projector, an electronic whiteboard, a projection screen, a sound system, a television, a video conference terminal, and the like.
In an embodiment, the interactive smart tablet establishes a data connection with at least one external device. External devices include, but are not limited to: a mobile phone, a notebook computer, a USB flash disk, a tablet computer, a desktop computer and the like.
The communication mode of the data connection between the external device and the interactive intelligent tablet is not limited in the embodiments; USB connection, the Internet, a local area network, Bluetooth, Wi-Fi, ZigBee, or the like may be used.
Optionally, the interactive intelligent tablet is provided with music editing application software, which may be pre-installed in the tablet or may be downloaded from a third-party device or a server and installed when the tablet starts the music editing application. The third-party device is not limited in the embodiments. Specifically, the music editing application software is used to implement the following scheme.
As shown in fig. 1, the method comprises the steps of:
step S11, displaying a display area of the music editing application.
Specifically, the music editing application may be a standalone application installed on the interactive intelligent device, or a function integrated into a whiteboard application or another application. The music editing application allows the user to freely edit music on the interactive intelligent device and records the music file obtained from the editing. The application has a corresponding display window used to present its interface; the display area of the music editing application is this window.
In an alternative embodiment, the user initiates the interactive smart device's whiteboard application and initiates the composition control in the whiteboard application, thereby initiating the composition application.
Step S13, displaying at least one line of staff in a display area according to the triggered staff control;
specifically, the staff control is a control set in the composing application interface and is used for generating the staff. The position of the staff control set in the composition application interface is not specifically limited in this application.
Fig. 2 is a schematic diagram of a composition application interface according to embodiment 1 of the present application. In an alternative embodiment, as shown in fig. 2, the blank area is the staff editing area for displaying the staff edited by the user. The left side of the interface contains three controls, of which the lowest, named "staff", is the staff control. When the user clicks the staff control, a line of staff is displayed in the blank area; alternatively, the user drags the staff control into the blank area and a line of staff is displayed there.
More specifically, displaying at least one line of staff in the display area includes:
step S131, detecting a music composing mode, wherein the music composing mode comprises a single-sound part mode and a multi-sound part mode.
Specifically, the single-part mode generates a single melody without accompaniment, while in the multi-part mode the generated music includes accompaniment in addition to the main melody.
The single-part and multi-part modes may be selected by the user; if the user makes no selection, the single-part mode may be used by default.
In an alternative embodiment, still referring to fig. 2, several selectable menus are included above the blank area. The first is the mode menu described above, offering "single hand" and "both hands", where "single hand" corresponds to the single-part mode and "both hands" to the multi-part mode.
In step S133, when the editing mode is the single-part mode, a single line of staff is generated.
If the composing mode is a mono mode, only one line of staff may be displayed, and the position where the line of staff is displayed in the blank area may be a preset position. When the staff of the row is full of notes (e.g., the number of notes displayed for the row reaches a preset number), or when the staff control is triggered again by the user, a new row of staff is generated.
In step S135, when the editing mode is the multi-part mode, multiple lines of staff are generated, including at least one treble staff and one bass staff.
Specifically, the number of staff lines generated equals the number of parts. For example, if the user selects the two-part ("both hands") mode, which comprises a treble part and a bass part, the staves of the different parts carry corresponding part identifiers, namely a treble staff and a bass staff, and the user can write the main melody and the accompaniment melody separately in the two lines of staff.
Step S15, detecting the selected music control, wherein the music control is used for representing at least one music symbol applied to the staff.
Specifically, the music control includes: a first music control for representing notes and a second music control for representing rests.
Besides notes and rests, a variety of other music symbols may be provided for the staff, such as accidentals, sustain marks, accents, and the like.
In an alternative embodiment, as shown in fig. 2, the first "note" control and the second "rest" control on the left are both music controls as described above. Only the two music symbols of notes and rests are shown in the interface schematic; optionally, the left control bar may include a scroll bar so that other music symbols are displayed as the user scrolls down.
And S17, acquiring the position of the music symbol represented by the music control in the staff.
After the user selects the music control, the music symbol corresponding to it needs to be placed in the staff to give the symbol practical meaning.
Still taking fig. 2 as an example, the user clicks the "note" control, and a floating window is displayed in the application interface showing the various types of notes, such as whole notes, half notes, eighth notes, and so on. The user drags one of the notes from the floating window into the staff, places it at the appropriate position, and releases the mouse to drop the selected note, i.e., to place it at a position in the staff.
As an alternative embodiment, a corresponding music symbol is generated after the music control is selected, and its position in the staff is changed by dragging. Acquiring the position, in the staff, of the music symbol represented by the music control then includes:
in step S171, the position of the music symbol in the staff is detected when the music symbol is released.
Specifically, releasing the music symbol may correspond to releasing the mouse button used to drag it.
Fig. 3 is a schematic illustration of adding music symbols to a staff according to embodiment 1 of the present application. In an alternative embodiment, taking the quarter note on the right as an example, as shown in fig. 3, the user selects the quarter note, drags it to between the third and fourth lines of the staff, and releases the mouse; the position of the quarter note is thereby determined to be the position shown in fig. 3.
It should be noted that the direction of the note stems can be adjusted automatically according to a predetermined stem rule, for example: stems point down for notes on or above the third line and up for notes below it. Then, when the user drags a note above the third line the stem points down, and when the note is dragged below the third line the stem direction automatically changes to up, without manual adjustment.
A further possible case is described below. As an alternative embodiment, detecting the end position of a music symbol in the staff when it is released includes:
in step S1711, if the music symbol is located at the upper side of the staff and the distance from the fifth line of the staff is within the first predetermined range during the process of dragging the music symbol, n lines are added.
Specifically, n may be an integer greater than 0 and less than 4. When the music symbol is above the staff and within the first predetermined range of the fifth line, the pitch it represents may lie above the fifth line, so n ledger lines need to be displayed above the staff.
The n ledger lines may be displayed all at once, from the first up to the third; or the number may be judged from the symbol's position: if the symbol sits at the first ledger line above the staff, one line is displayed; if at the second, the first and second lines are displayed; and so on.
In step S1713, if, while being dragged, the music symbol is below the staff and the distance to the first line of the staff is within the second predetermined range, m ledger lines are displayed below the staff.
Specifically, m may be an integer greater than 0 and less than 4. When the music symbol is below the staff and within the second predetermined range of the first line, the pitch it represents may lie below the first line, so m ledger lines need to be displayed below the staff.
Displaying the m ledger lines below is analogous to displaying the n ledger lines above: all may be displayed directly, or the number may be judged from the symbol's position, displaying one line if the symbol sits at the first ledger line below, the first and second lines if at the second, and so on.
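To make the ledger-line rule concrete, the following is a minimal sketch (the half-step staff coordinate system, the function name, and the cap of three lines are illustrative assumptions, not taken from the patent):

```python
# Sketch of the ledger-line logic above. Assumed coordinates: staff
# positions are counted in half-steps, 0 = first line, 8 = fifth line,
# so each line or space is one step.
def ledger_lines_needed(position: int) -> tuple[str, int]:
    """Return which side of the staff needs ledger lines and how many (n or m)."""
    if position > 8:                                 # above the fifth line
        return "above", min((position - 8) // 2, 3)  # n is an integer in 0..3
    if position < 0:                                 # below the first line
        return "below", min(-position // 2, 3)       # m is an integer in 0..3
    return "none", 0

# A note two half-steps above the fifth line sits on the first upper
# ledger line, so one line is displayed:
assert ledger_lines_needed(10) == ("above", 1)
```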
As an alternative embodiment, before the music symbol is released, the added ledger lines are displayed in a color different from that of the staff; after the symbol is released, the ledger lines on which the symbol sits take the same color as the staff.
In the above scheme, while the music symbol has not yet been released, the displayed ledger lines may or may not end up holding the symbol, so they are shown in a style different from the staff to indicate that they may disappear. Once the symbol is released onto a ledger line, its position is determined and that ledger line necessarily becomes part of the staff, so the ledger lines under the symbol are given the same color as the staff.
In step S1715, the end position is determined as the position of the music symbol in the staff.
The staff has designated positions, namely on a line of the staff or in a space between lines. If the position where the user releases a note is not exactly at a designated position, the note is snapped to the nearest designated position according to the distance between the release position and the designated positions.
It should be noted that after the position of a music symbol is determined, it need not be locked: a symbol already on the staff can still be selected and dragged, and its position in the staff adjusted.
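A minimal sketch of this snapping step (the pixel geometry and the helper name are assumptions for illustration):

```python
# Sketch: snap a released note to the nearest designated staff position.
# Assumes line_gap pixels between adjacent staff lines, so designated
# positions (lines and the spaces between them) are line_gap / 2 apart.
def snap_to_staff(y_released: float, y_first_line: float, line_gap: float) -> float:
    half = line_gap / 2.0
    steps = round((y_first_line - y_released) / half)  # nearest line or space
    return y_first_line - steps * half

# A note dropped 1.4 half-gaps above the first line snaps to the space
# just above that line (exactly 1 half-gap):
print(snap_to_staff(86.0, 100.0, 20.0))  # -> 90.0
```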
Step S19, generating a composing file according to the music symbol and the position of the music symbol in the staff.
Specifically, the aforementioned composition file may be a file in Midi format. The Midi format is a binary format consisting of several sub-blocks of identical structure. A Midi file does not sample the music; it only generates a corresponding set of data for each note. When the composition file is a Midi file, the specific steps of generating it from the music symbols and their positions in the staff include:
Step S191, record the type of the music symbol and the position of the music symbol in the staff.
Specifically, the recording captures the action of placing music symbols on the editing interface without generating an intermediate file.
Taking the music symbol to be a note: the note's position in the staff represents its pitch and its type represents its note value (duration), so recording the type of the symbol and its position in the staff determines at least one sounding.
Fig. 4 is a schematic diagram of a recording result according to embodiment 1 of the present application. In conjunction with fig. 3 and fig. 4: fig. 3 also contains a whole note, and fig. 4 is the recording result of that whole note. The recorded result includes two parameters, note and ticks, where note represents the pitch and ticks represents the note duration. Both parameters map a recorded value to an actual value: the pitch of the whole note is middle C, whose note value is 60, and the duration of the whole note is four times the base time (the duration of a quarter note is the base time, 120 in this example), i.e. 480. Thus the whole note in fig. 3 is recorded as note: 60, ticks: 480.
Fig. 5 is a schematic diagram of another recording result according to embodiment 1 of the present application. In conjunction with fig. 3 and fig. 5, fig. 5 is the recording result of the quarter note in fig. 3. The recorded result again includes the note and ticks parameters. The pitch of the quarter note is high C, whose note value is 72, and its duration is the base time, 120, so the quarter note in fig. 3 is recorded as note: 72, ticks: 120.
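The recorded result of each note can thus be pictured as a small record of the two parameters above; a sketch (the class name is an assumption, the values are those of figs. 4 and 5):

```python
from dataclasses import dataclass

BASE_TICKS = 120  # base time: the tick count of one quarter note in this example

@dataclass
class NoteRecord:
    note: int   # pitch: middle C = 60, high C = 72
    ticks: int  # duration in ticks: a whole note is 4 * BASE_TICKS = 480

whole_note   = NoteRecord(note=60, ticks=4 * BASE_TICKS)  # fig. 4: note: 60, ticks: 480
quarter_note = NoteRecord(note=72, ticks=1 * BASE_TICKS)  # fig. 5: note: 72, ticks: 120
```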
While the type of each music symbol and its position in the staff are recorded, the following step is also performed:
In step S195, a pointing parameter is recorded that points from the music symbol to its preceding and following items, indicating the symbol's neighbors.
The pointing of a music symbol is used to determine its position in the score: the preceding pointer points to the adjacent symbol before it and the following pointer points to the adjacent symbol after it. That is, of two adjacent symbols, the earlier is the preceding item of the later, and the later is the following item of the earlier.
FIG. 6 is a schematic diagram of note pointing according to an embodiment of the present application. Taking the two notes of fig. 3 as an example, fig. 3 contains two adjacent notes, the whole note on the left and the quarter note on the right; that is, the whole note is the preceding item of the quarter note and the quarter note is the following item of the whole note. For the whole note, being the first note, its preceding pointer (left) points to no note and its following pointer (right) points to the quarter note; the preceding pointer (left) of the quarter note points to the whole note, and its following pointer (right) points to the next note.
It should be noted that the first music symbol in the staff has no symbol before it, so its preceding pointer points to null, i.e., left points to null. If two music symbols are not contiguous and the space between them is empty, an empty block may be placed at the empty position with its note value set to -1; the right pointer of the preceding symbol points to this block and the left pointer of the following symbol points to it as well, so the position and order of every music symbol in the composition are preserved.
Through the pointing parameters carried by a music symbol, the previous and next symbols of the current one can be determined, and hence the symbol's position in the whole composition and the order of all the symbols.
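In effect the pointing parameters form a doubly linked list of symbols. A sketch of the structure (class and field names are assumptions), including the empty block with note value -1 used for a gap:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SymbolNode:
    note: int                             # pitch, or -1 for an empty placeholder block
    ticks: int                            # duration in ticks
    left: Optional["SymbolNode"] = None   # pointing parameter: preceding item
    right: Optional["SymbolNode"] = None  # pointing parameter: following item

def link(prev: "SymbolNode", nxt: "SymbolNode") -> None:
    prev.right, nxt.left = nxt, prev

# Fig. 3: the whole note is first (its left points to null), followed
# by the quarter note.
whole, quarter = SymbolNode(60, 480), SymbolNode(72, 120)
link(whole, quarter)

# A gap between two non-contiguous symbols would be filled by an empty
# block such as SymbolNode(-1, 0), linked in the same way, so traversal
# still preserves every symbol's order.
```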
Step S197, generating a composing file according to the recording result.
In the steps above, the recorded result contains each music symbol's pitch, note duration, and position in the composition (the position in the score being determined through the pointing parameters), so the recorded result must be converted into the format of the composition file. Specifically, generating the composition file from the recorded result may include:
step S1971, determining the Midi file header of the composing file according to the Midi format, the number of audio tracks and the basic time format type of the music on the basis of the Midi file header format.
Specifically, the parameters required in the Midi file header include the number of tracks of the music and the preset base time. Fig. 7 is a schematic diagram of the Midi file-header structure in the related art. As shown in fig. 7, "4d 54 68 64" marks the content as a file header, and every Midi file header takes the form "4d 54 68 64 00 00 00 06 ff ff nn nn dd dd". The first portion 71, "ff ff", indicates the Midi format: "00 00" means single track, "00 01" means multi-track and synchronous, and "00 02" means multi-track and asynchronous. The second portion 72, "nn nn", is the number of tracks, namely the actual number of tracks plus one global track. The third portion 73, "dd dd", indicates the basic time format, of which there are two types: the first defines the number of ticks per quarter note; the second defines the number of SMPTE (Society of Motion Picture and Television Engineers time code) frames per second and the ticks per SMPTE frame.
The Midi format of the music, the number of tracks, and the basic time format type may be obtained from the user's settings; where the user has not set them, they may be taken from the application's default values.
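A sketch of assembling the 14-byte header from these three fields (the function name is an assumption; the byte layout is the "MThd" format described above):

```python
import struct

def midi_header(midi_format: int, n_tracks: int, ticks_per_quarter: int) -> bytes:
    # "4d 54 68 64" is ASCII "MThd"; the length field of a header is always 6.
    return b"MThd" + struct.pack(">IHHH", 6, midi_format, n_tracks, ticks_per_quarter)

# Format 1 (multi-track, synchronous), 2 tracks, 120 ticks per quarter note,
# matching the worked example later in this embodiment:
assert midi_header(1, 2, 120).hex(" ") == "4d 54 68 64 00 00 00 06 00 01 00 02 00 78"
```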
In step S1973, the global track, the current track, and the Midi event sequence of the composition file are generated following the Midi track-block format, where the global track carries the global information of the music, the current track carries the performance information of each track, and the Midi event sequence represents the successive events in the music.
Specifically, the track blocks comprise the global track, the current track, and the Midi event sequence. The global information of the music may include additional information (for example, copyright), the song tempo, system codes, and the like; the performance information of the current track may include its instrument, instrument timbre, and so on; and the Midi event sequence represents the musical events in the music, i.e., the content of the recorded result above.
Fig. 8 is a schematic diagram of the track-block structure in the related art. As shown in fig. 8, "4d 54 72 6b" indicates that the following bytes form a track block; the first portion 81, "nn nn nn nn", gives the length of the data portion, and the second portion 82, "data", is the track data, consisting of meta events and Midi events.
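Correspondingly, a track block is the "MTrk" tag, a length field, and the data; a sketch of wrapping raw event bytes into a block (the helper name is an assumption):

```python
import struct

def track_chunk(track_data: bytes) -> bytes:
    # "4d 54 72 6b" is ASCII "MTrk"; the next four bytes give the data length.
    return b"MTrk" + struct.pack(">I", len(track_data)) + track_data

# A track's data conventionally ends with the end-of-track meta event ff 2f 00:
END_OF_TRACK = bytes.fromhex("00 ff 2f 00")
```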
As an alternative embodiment, generating the Midi event sequence of the composition file from the recorded result, following the Midi track-block format, specifically includes:
In step S19731, the Midi events of the music symbols are mapped from each symbol's note duration and pitch.
Specifically, the duration and pitch of a music symbol are obtained from the recorded result. A Midi event consists of two parts, a time interval and an event, where the time interval is the duration of the current note, and the event may include the note pitch, note pressed, note released, press velocity, release velocity, and so on. The note pitch is obtained from the recorded result, and the other values may be obtained from other music symbols (e.g., dynamics or accent marks).
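As a sketch of this mapping for a plain note (the velocities 0x50 and 0x40 follow the worked example later in this embodiment; the variable-length encoding of the time interval is the standard Midi scheme):

```python
def vlq(value: int) -> bytes:
    """Encode a tick count as a Midi variable-length quantity, 7 bits per byte."""
    out = [value & 0x7F]
    value >>= 7
    while value:
        out.append((value & 0x7F) | 0x80)  # continuation bytes carry a set high bit
        value >>= 7
    return bytes(reversed(out))

def note_events(note: int, ticks: int, channel: int = 0) -> bytes:
    on = bytes([0x00, 0x90 | channel, note, 0x50])          # delta 0, note-on, velocity
    off = vlq(ticks) + bytes([0x80 | channel, note, 0x40])  # delta = duration, note-off
    return on + off

# The whole note of fig. 3 (note 60, 480 ticks); 480 encodes as "83 60":
assert note_events(60, 480).hex(" ") == "00 90 3c 50 83 60 80 3c 40"
```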
Step S19733, following the pointing parameters, the Midi events of the music symbols are traversed and connected to obtain the Midi event sequence of the composition file.
Specifically, each music symbol corresponds to a Midi event. Since a symbol's preceding and following items can be determined from the pointing parameters, the Midi events of all symbols can be traversed in order along the pointers and connected in sequence, yielding the current track of the composition file.
Step S1975, sequentially connecting the Midi file header, the global audio track, the current audio track and the Midi event sequence to obtain the composing file.
Concatenating the data of these parts in order yields the composition file in the Midi file format.
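Putting the pieces together, a sketch of the final assembly, reusing the SymbolNode, note_events, midi_header, and track_chunk helpers sketched above (the global-track contents shown are assumptions consistent with the worked example below):

```python
def build_midi(first: "SymbolNode") -> bytes:
    events = b""
    node = first
    while node is not None:                 # traverse along the pointing parameters
        if node.note >= 0:                  # skip empty placeholder blocks (note -1)
            events += note_events(node.note, node.ticks)
        node = node.right
    events += END_OF_TRACK

    header = midi_header(1, 2, 120)
    global_track = track_chunk(bytes.fromhex(
        "00 ff 58 04 04 02 18 08"           # time signature: 4/4
        "00 ff 51 03 07 a1 20"              # tempo: 500000 microseconds per quarter
        "00 ff 2f 00"))                     # end of track
    current_track = track_chunk(bytes.fromhex("00 c0 00") + events)  # program 0: piano
    return header + global_track + current_track

with open("composition.mid", "wb") as f:    # the file name is illustrative
    f.write(build_midi(whole))
```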
As can be seen from the above, the embodiments of the present application display the display area of the music editing application; display at least one line of staff in the display area according to the triggered staff control; detect the selected music control, the control representing at least one music symbol to be applied to the staff; acquire the position in the staff of the music symbol represented by the control; and generate a composition file from the music symbols and their positions in the staff. This scheme gives the user, on the interactive intelligent device, a music editing application with which music can be freely edited on a staff and the result turned into a composition file, thereby solving the prior-art technical problem that interactive intelligent devices cannot edit music.
Next, the generation of a music file from the two notes in fig. 3 is described as an example.
First, the recording results of the two notes, shown in figs. 4 and 5, are obtained according to the embodiment described above.
Then the file header is generated: if the user has set a timbre, that timbre is used, otherwise it defaults to piano; if the user has set a meter, that meter is used, otherwise it defaults to 4/4.
A Midi file header is then generated: 4d 54 68 64 00 00 00 06 00 01 00 02 00 78. Specifically, "4d 54 68 64" is the header tag, "00 00 00 06" is the header length, "00 01" indicates that the Midi file format is multi-track and synchronous, "00 02" indicates that the total number of tracks is 2, and "00 78" is the tick value of a quarter note (120).
Track blocks of the Midi file are then generated. The global track may be "4d 54 72 6b 00 00 00 19 00 ff 03 00 00 ff 58 04 04 02 18 08 00 ff 51 03 07 a1 20 83 d1 00 ff 2f 00". The current track may be "4d 54 72 6b xx xx xx xx 00 ff 03 05 50 69 61 6e 6f 00 ff 04 0b 47 4d 20 50 6c 61 79 65 72 20 31 00 c0 00 00 b0 0a 40 00 b0 07 60", where "00 ff 03 05 50 69 61 6e 6f" carries the sequence or track name ("Piano"), "00 ff 04 0b 47 4d 20 50 6c 61 79 65 72 20 31" carries the instrument name ("GM Player 1"), "00 c0 00" is the program-change event selecting the timbre, and "00 b0 0a 40 00 b0 07 60" are Midi controller commands.
Next, the Midi event sequence is generated. Fig. 9 is a schematic diagram of the Midi event sequence generated according to embodiment 1 of the present application; the two recorded results in the figure are those of the two notes in fig. 3. In conjunction with fig. 9, the whole note maps to the Midi event "00 90 3c 50 83 60 80 3c 40" and the quarter note maps to the Midi event "00 90 48 50 78 80 48 40".
Taking the Midi event mapped from the whole note as an example: "90" indicates that the note is pressed; "3c" indicates note: 60; "50" is the press velocity; "83 60" is the time interval, namely ticks: 480; "80" indicates that the note is released; "3c" again indicates note: 60; and "40" is the release velocity. The quarter note is mapped in the same way and is not repeated here.
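As a quick check, the standard variable-length decoding turns "83 60" back into the 480 ticks recorded for the whole note (a sketch, not code from the patent):

```python
def decode_vlq(data: bytes) -> int:
    value = 0
    for b in data:
        value = (value << 7) | (b & 0x7F)  # the low 7 bits carry the payload
        if not b & 0x80:                   # a clear high bit marks the last byte
            break
    return value

assert decode_vlq(bytes.fromhex("83 60")) == 480  # ticks: 480, the whole note
assert decode_vlq(bytes.fromhex("78")) == 120     # ticks: 120, the quarter note
```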
At this point the file header, the global track, the current track, and the Midi event sequence of the composition file have all been obtained. Finally, these parts are concatenated, giving the composition file for the two notes in fig. 3: 4d 54 68 64 00 00 00 06 00 01 00 02 00 78 4d 54 72 6b 00 00 00 19 00 ff 03 00 00 ff 58 04 04 02 18 08 00 ff 51 03 07 a1 20 83 d1 00 ff 2f 00 4d 54 72 6b 00 00 00 xx 00 ff 03 05 50 69 61 6e 6f 00 ff 04 0b 47 4d 20 50 6c 61 79 65 72 20 31 00 c0 00 00 b0 0a 40 00 b0 07 60 00 90 3c 50 83 60 80 3c 40 00 90 48 50 78 80 48 40.
Example 2
Fig. 10 is a flowchart of a method for editing music of an interactive intelligent device according to embodiment 2 of the present invention. This example was embodied on the basis of example 1 described above.
Step S101, displaying a display area of the music editing application.
In step S103, a first performance parameter representing the performance timbre and a second performance parameter representing the meter (beat) of the music are acquired.
Specifically, the performance timbre is the instrument timbre used when the generated music file is played, the music is the music currently being created, and its meter is an essential element of music editing.
In an alternative embodiment, as shown in fig. 2, among the selection menus above the blank area of the editing interface, the middle one is the timbre menu. After the corresponding control is triggered and the drop-down menu is displayed, the required timbre can be selected from the timbres pre-stored on the device. If the user selects no timbre, the acquired first performance parameter is the default timbre, which may be piano.
Still in the example of fig. 2, the third menu above the blank area is the "beat" menu indicating the second performance parameter. If the user selects no meter, the acquired second performance parameter is the default meter, which may be 4/4.
Step S105, at least one line of staff is displayed in the display area according to the triggered staff control.
Step S107, detecting the selected music control, wherein the music control is used for characterizing at least one music symbol applied to the staff.
Step S109, the position of the music symbol represented by the music control in the staff is obtained.
Step S1011, generating a composing file according to the music symbol and the position of the music symbol in the staff.
In step S1013, the play control is detected to be triggered.
Specifically, the play control is used to play the score already present in the music editing interface. Still referring to fig. 2, three further controls are included above the blank area, of which the rightmost, named "play", is the play control.
The user may trigger the play control either after or before the composition file is generated, so the music editing application can play both the final composition file and, during editing, the content already present, helping the user audition the existing material.
Step S1015, corresponding music is played according to the music symbol and the position.
In an alternative embodiment, the user triggers the play control after generating the music composing file, and the music composing application plays the corresponding music according to the music composing file.
In an alternative embodiment, the user triggers the play control before the composition file is generated, and the editing application plays the content corresponding to the score already present in the editing area (i.e., the blank area of fig. 2).
Example 3
According to an embodiment of the present invention, there is provided an embodiment of a music composing device of an interactive smart device, and fig. 11 is a schematic diagram of a music composing device of an interactive smart device according to embodiment 3 of the present invention, and in combination with fig. 11, the device includes:
the first display module 110 is configured to display a display area of the composition application.
Specifically, the music editing application may be a standalone application installed on the interactive intelligent device, or a function integrated into a whiteboard application or another application. The music editing application allows the user to freely edit music on the interactive intelligent device and records the music file obtained from the editing.
In an alternative embodiment, the user initiates the interactive smart device's whiteboard application and initiates the composition control in the whiteboard application, thereby initiating the composition application.
A second display module 112 is configured to display at least one line of staff in the presentation area according to the triggered staff control.
Specifically, the staff control is a control set in the composing application interface and is used for generating the staff. The position of the staff control set in the composition application interface is not specifically limited in this application.
In an alternative embodiment, as shown in fig. 2, the blank area is used to display the staff edited by the user. The left side of the interface contains three controls, of which the lowest, named "staff", is the staff control. When the user clicks the staff control, a line of staff is displayed in the blank area; alternatively, the user drags the staff control into the blank area and a line of staff is displayed there.
A detection module 114 for detecting a selected music control, where the music control represents at least one music symbol to be applied to the staff.
An acquisition module 116 is configured to acquire a location of the music symbol represented by the music control in the staff.
After the user selects the music control, the music symbol corresponding to it needs to be placed in the staff to give the symbol practical meaning.
Still taking fig. 2 as an example, the user clicks the "note" control, and a floating window is displayed in the application interface showing the various types of notes, such as whole notes, half notes, eighth notes, and so on. The user drags one of the notes from the floating window into the staff, places it at the appropriate position, and releases the mouse to drop the selected note, i.e., to place it at a position in the staff.
The generating module 118 is configured to generate a composition file according to the music symbol and the position of the music symbol in the staff.
Specifically, the aforementioned composition file may be a file in Midi format. The Midi format is a binary format consisting of several sub-blocks of identical structure. A Midi file does not sample the music; it only generates a corresponding set of data for each note.
As can be seen from the above, in the embodiments of the present application the first display module displays the display area of the music editing application; the second display module displays at least one line of staff in the display area according to the triggered staff control; the detection module detects the selected music control, the control representing at least one music symbol to be applied to the staff; the acquisition module acquires the position in the staff of the music symbol represented by the control; and the generation module generates a composition file from the music symbols and their positions in the staff. This scheme gives the user, on the interactive intelligent device, a music editing application with which music can be freely edited on a staff and the result turned into a composition file, thereby solving the prior-art technical problem that interactive intelligent devices cannot edit music.
As an alternative embodiment, the second display module, which displays the staff according to the triggered staff control, comprises:
and the first detection submodule is used for detecting a music composing mode, wherein the music composing mode comprises a single-sound-part mode and a multi-sound-part mode.
Specifically, the single-part mode generates a single melody without accompaniment, while in the multi-part mode the generated music includes accompaniment in addition to the main melody.
The single-part and multi-part modes may be selected by the user; if the user makes no selection, the single-part mode may be used by default.
In an alternative embodiment, still referring to fig. 2, several selectable menus are included above the blank area. The first is the mode menu described above, offering "single hand" and "both hands", where "single hand" corresponds to the single-part mode and "both hands" to the multi-part mode.
A first generation submodule for generating a single line of staff when the editing mode is the single-part mode.
If the composing mode is a mono mode, only one line of staff may be displayed, and the position where the line of staff is displayed in the blank area may be a preset position. When the staff of the row is full of notes (e.g., the number of notes displayed for the row reaches a preset number), or when the staff control is triggered again by the user, a new row of staff is generated.
A second generation submodule for generating multiple lines of staff when the editing mode is the multi-part mode, the lines including at least one treble staff and one bass staff.
Specifically, the number of staff lines generated equals the number of parts. For example, if the user selects the two-part ("both hands") mode, which comprises a treble part and a bass part, the staves of the different parts carry corresponding part identifiers, namely a treble staff and a bass staff, and the user can write the main melody and the accompaniment melody separately in the two lines of staff.
As an alternative embodiment, the music controls at least include: a first music control for representing notes and a second music control for representing rests.
In addition to notes and rests, a variety of other music symbols may be applied to the staff, such as accidentals, sustain marks, accents, and so on.
In an alternative embodiment, as shown in fig. 2, the first "note" control and the second "rest" control on the left both correspond to the music symbols described above. Only the note and rest symbols are shown in the interface schematic; optionally, the left control bar may include a scroll bar so that other music symbols are displayed as the user scrolls down.
As an optional embodiment, a corresponding music symbol is generated after the music control is selected, and the position of the music symbol in the staff is changed by dragging the music symbol; the acquisition module comprises:
The second detection sub-module is configured to detect the end position of the music symbol in the staff when the music symbol is released.
In particular, releasing the music symbol may correspond to releasing the mouse button with which the music symbol was clicked and dragged.
Fig. 3 is a schematic illustration of adding music symbols to a staff according to embodiment 1 of the present application. In an alternative embodiment, taking the quarter note on the right side as an example, as shown in fig. 3, the user selects the quarter note and drags it to between the third and fourth lines of the staff; after the quarter note is placed between the third and fourth lines, the mouse dragging it is released, and the position of the quarter note is thereby determined as shown in fig. 3.
It should be noted that the direction of the note stems can be adjusted automatically according to a predetermined stem rule. For example, if the rule is that notes above the third line have downward stems and notes below the third line have upward stems, then when the user drags a note above the third line its stem points downward, and when the note is dragged below the third line the stem direction automatically changes to upward, with no manual adjustment required.
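To make the rule concrete, here is a minimal sketch in Python; the half-space position encoding and the function name are illustrative assumptions, not part of the embodiment:

```python
def stem_direction(staff_position: int) -> str:
    """Return the stem direction implied by the example rule above.

    staff_position counts lines and spaces in half staff-spaces from the
    bottom: 0 = first line, 2 = second line, 4 = third line (assumed encoding).
    """
    THIRD_LINE = 4
    # Above the third line the stem points down; otherwise it points up.
    return "down" if staff_position > THIRD_LINE else "up"
```

Re-evaluating this check whenever the note's drag position changes yields the automatic flip described above.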
The determining sub-module is configured to determine the end position as the position of the music symbol in the staff.
The staff has designated positions, i.e. on the lines of the staff or in the spaces between them. If the position where the user releases the note is not exactly at a designated position, the note is snapped to the nearest designated position according to the distance between the release position and the designated positions.
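A minimal snapping sketch follows; the upward-growing coordinate system and the names are assumptions:

```python
def snap_to_staff(release_y: float, first_line_y: float, staff_space: float) -> float:
    """Snap a released note's vertical coordinate to the nearest designated
    position. Lines and spaces alternate every half staff-space, indexed
    from the first (bottom) line upward."""
    half = staff_space / 2.0
    index = round((release_y - first_line_y) / half)
    return first_line_y + index * half
```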
It should be noted that after the position of a music symbol has been determined, that position need not be locked; a music symbol already on the staff can still be selected and dragged again to adjust its position in the staff.
As an alternative embodiment, the second detection sub-module comprises:
The first display unit is configured to display n upper ledger lines when, while the music symbol is being dragged, the music symbol is located on the upper side of the staff and its distance from the fifth line of the staff is within a first predetermined range.
Specifically, n may be an integer greater than 0 and less than 4. When the music symbol is located on the upper side of the staff and its distance from the fifth line is within the first predetermined range, this indicates that the pitch represented by the music symbol may lie above the fifth line of the staff, so n upper ledger lines need to be displayed.
The n upper ledger lines may be displayed either by directly showing the first through third upper ledger lines, or by judging from the position of the music symbol: if the music symbol is at the first upper ledger line, the first upper ledger line is displayed; if it is at the second upper ledger line, the first and second upper ledger lines are displayed; and so on.
The second display unit is configured to display m lower ledger lines when, while the music symbol is being dragged, the music symbol is located on the lower side of the staff and its distance from the first line of the staff is within a second predetermined range.
Specifically, m may be an integer greater than 0 and less than 4. When the music symbol is located on the lower side of the staff and its distance from the first line is within the second predetermined range, this indicates that the pitch represented by the music symbol may lie below the first line of the staff, so m lower ledger lines need to be displayed.
The m lower ledger lines are displayed in the same way as the upper ledger lines: either all are shown directly, or the number is judged from the position of the music symbol: if the music symbol is at the first lower ledger line, the first lower ledger line is displayed; if it is at the second lower ledger line, the first and second lower ledger lines are displayed; and so on.
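A sketch of the position-based variant; the half-space position encoding and the cap of three ledger lines follow the ranges above, while the names are assumptions:

```python
def ledger_lines_to_show(position: int) -> tuple[int, int]:
    """Return (upper, lower) ledger-line counts for a dragged music symbol.

    position indexes lines and spaces in half staff-spaces from the first
    line: 0 = first line, 8 = fifth line (assumed encoding). Each side shows
    at most three ledger lines (0 < n, m < 4).
    """
    FIFTH_LINE, FIRST_LINE = 8, 0
    upper = min(max((position - FIFTH_LINE) // 2, 0), 3)
    lower = min(max((FIRST_LINE - position) // 2, 0), 3)
    return upper, lower
```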
As an alternative embodiment, the colors of the upper n ledger lines and the lower m ledger lines differ from the color of the staff before the music symbol is released; after the music symbol is released on an upper or lower ledger line, the color of the ledger line on which the music symbol sits becomes the same as that of the staff.
In the above scheme, while the music symbol has not yet been released, a displayed upper or lower ledger line may or may not end up holding the symbol, so the ledger lines are displayed in a style different from the staff to indicate that they may disappear. Once the music symbol is released on an upper or lower ledger line, the position of the music symbol is determined and that ledger line necessarily becomes part of the staff, so its color is changed to match the staff.
As an alternative embodiment, the above device further comprises:
The parameter acquisition module is configured to acquire, after the display area of the music editing application is displayed, a first performance parameter and a second performance parameter, where the first performance parameter represents the performance timbre and the second performance parameter represents the performance tempo.
As an alternative embodiment, the above device further comprises:
The detection module is configured to detect, after the position in the staff of the music symbol represented by the music control has been determined, that the play control is triggered.
The playing module is configured to play the corresponding music according to the music symbol and its position.
As an alternative embodiment, the composition file is a Midi file, and the generation module comprises:
The first recording sub-module is configured to record the type of the music symbol and the position of the music symbol in the staff.
Specifically, the recording captures the act of placing music symbols on the composing interface and does not generate an intermediate file.
Taking the music symbol to be a note: the position of the note in the staff represents its pitch, and the type of the note represents its duration (note value); therefore, by recording the type of the music symbol and its position in the staff, at least one sounding can be determined.
Still referring to fig. 3 and fig. 4, fig. 3 further includes a whole note, and fig. 4 shows the recorded result of that whole note. The recorded result includes two parameters, note and ticks, where note represents the pitch and ticks represents the note duration. Both parameters have a correspondence between the recorded value and the actual value: the pitch of the whole note is middle C, whose corresponding note value is 60, and the duration of the whole note is four times the base time (the duration of a quarter note is the base time; in this example the base time is 120), i.e. 480. Thus the whole note in fig. 3 is recorded as note: 60, ticks: 480.
Referring to fig. 3 and fig. 5, fig. 5 shows the recorded result of the quarter note in fig. 3. The recorded result again includes the note and ticks parameters. The pitch of the quarter note is treble C, whose corresponding note value is 72, and its duration is the base time, 120; therefore the quarter note in fig. 3 is recorded as note: 72, ticks: 120.
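A minimal sketch of such a recorded entry; the class and field names are assumptions, while the note and ticks values follow figs. 4 and 5:

```python
from dataclasses import dataclass

BASE_TIME = 120  # ticks of a quarter note, the base time in this example

@dataclass
class RecordedSymbol:
    note: int   # pitch as a note value (middle C = 60, treble C = 72)
    ticks: int  # duration in ticks (quarter note = BASE_TIME)

whole_note = RecordedSymbol(note=60, ticks=4 * BASE_TIME)  # note:60, ticks:480
quarter_note = RecordedSymbol(note=72, ticks=BASE_TIME)    # note:72, ticks:120
```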
The generation sub-module is configured to generate the composition file according to the recording result.
In the above steps, the recording result includes the pitch of each music symbol, its note duration, and its position in the composition (the position of a music symbol in the score can be determined from the pointing parameters), so the recording result needs to be converted into the format of the composition file.
As an alternative embodiment, the recording sub-module further comprises:
The second recording sub-module is configured to record, while the type of the music symbol and its position in the staff are recorded, a pointing parameter that represents the pointing of the music symbol, where the pointing runs from the music symbol to its preceding item and its following item.
The pointing of a music symbol is used to determine the symbol's position in the score: the preceding pointer points to the music symbol immediately before it, and the following pointer points to the music symbol immediately after it. In other words, of two adjacent music symbols, the earlier one is the preceding item of the later one, and the later one is the following item of the earlier one.
FIG. 6 is a schematic diagram of note pointing according to an embodiment of the present application. Taking the two notes of fig. 3 as an example, fig. 3 includes two adjacent notes: the whole note on the left and the quarter note on the right. The whole note is the preceding item of the quarter note, and the quarter note is the following item of the whole note. Thus, for the whole note, since it is the first note, its preceding pointer (left) points to no note, while its following pointer (right) points to the quarter note; the preceding pointer (left) of the quarter note points to the whole note, and its following pointer (right) points to the next note.
It should be noted that the first music symbol in the staff has no preceding music symbol, so its preceding pointer points to null, i.e. its left pointer is null. If two music symbols are not contiguous and the space between them is empty, an empty block may be placed at the empty position with its note value set to -1; the right pointer of the preceding music symbol and the left pointer of the following music symbol both point to this empty block, so that the position and order of each music symbol in the composition are preserved.
Through the pointing parameters carried by a music symbol, the previous and next music symbols of the current one can be determined; thus the position of the music symbol in the whole composition, and hence the order of all the music symbols in the composition, can be determined.
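In effect the pointing parameters form a doubly linked list of recorded symbols. A minimal sketch, assuming the names below; the -1 note value for gap blocks follows the text above:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LinkedSymbol:
    note: int                                # -1 marks an empty gap block
    ticks: int
    left: Optional["LinkedSymbol"] = None    # preceding item (null for the first)
    right: Optional["LinkedSymbol"] = None   # following item

def link(prev: LinkedSymbol, nxt: LinkedSymbol) -> None:
    """Point two adjacent symbols at each other, as in fig. 6."""
    prev.right = nxt
    nxt.left = prev
```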
As an alternative embodiment, the generation sub-module comprises:
The determining unit is configured to determine, on the basis of the Midi file-header format, the Midi file header of the composition file according to the Midi format of the music, the number of tracks of the music, and the base-time format type.
In particular, the parameters required in the Midi file header include the Midi format, the number of tracks of the music, and a preset base time. Fig. 7 is a schematic diagram of the Midi header structure in the related art. Referring to fig. 7, "4d 54 68 64" indicates that this part of the content is a file header, and the header of every Midi file is coded as "4d 54 68 64 00 00 00 06 ff ff nn nn dd dd", where the first portion 71, "ff ff", indicates the Midi format, for example: "00 00" means single-track, "00 01" means multi-track and synchronous, and "00 02" means multi-track and asynchronous; the second portion 72, "nn nn", indicates the number of tracks, i.e. the actual number of tracks plus one global track; and the third portion 73, "dd dd", indicates the base-time format type, of which there are two: the first defines the number of ticks per quarter note, and the second defines the number of SMPTE (Society of Motion Picture and Television Engineers time code) frames per second and the number of ticks per SMPTE frame.
The Midi format of the music, the number of tracks, and the base-time format type may be obtained from the user's settings; where the user has not set them, they may be taken from the application's default values.
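A sketch of building the 14-byte header laid out above; the function name is an assumption, while the byte layout follows fig. 7 and the standard Midi header:

```python
import struct

def midi_header(midi_format: int, num_tracks: int, ticks_per_quarter: int) -> bytes:
    """Build 'MThd' (4d 54 68 64), the fixed length 6, then the format,
    track count, and base-time division as big-endian 16-bit values."""
    return b"MThd" + struct.pack(">IHHH", 6, midi_format, num_tracks, ticks_per_quarter)

# Format 1 (multi-track, synchronous), global + current track, 120 ticks per quarter
header = midi_header(1, 2, 120)
assert header.hex(" ").startswith("4d 54 68 64 00 00 00 06")
```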
The generation unit is configured to generate, on the basis of the Midi track-chunk format, the global track, the current track, and the Midi event sequence of the composition file, where the global track represents global information of the music, the current track represents the performance information of each track, and the Midi event sequence represents the successive events in the music.
Specifically, the track chunks comprise the global track, the current track, and the Midi event sequence. The global information of the music may include additional information about the music (for example, copyright information), the song tempo, system codes, and so on; the performance information of the current track may include the performing instrument, the instrument timbre, and so on; and the Midi event sequence represents the musical events in the music, i.e. the content of the recording result described above.
Fig. 8 is a schematic diagram of the track-chunk structure in the related art. Referring to fig. 8, "4d 54 72 6b" indicates that the following bytes form a track chunk, the first portion 81, "nn nn nn nn", indicates the length of the data portion, and the second portion 82, "data", carries the track data, which includes meta events and Midi events.
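Continuing the sketch, a track chunk can be framed the same way; again an illustrative assumption rather than the embodiment's code:

```python
import struct

def track_chunk(events: bytes) -> bytes:
    """Wrap encoded event bytes in 'MTrk' (4d 54 72 6b) followed by a
    big-endian 32-bit length of the data portion."""
    return b"MTrk" + struct.pack(">I", len(events)) + events
```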
The connection unit is configured to connect the Midi file header, the global track, the current track, and the Midi event sequence in sequence to obtain the composition file.
Connecting the data obtained for these parts in sequence yields the composition file in the Midi file format.
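Putting the two sketches together, the connection step then reduces to byte concatenation; the event variables and the file name below are illustrative assumptions:

```python
# Encoded meta/Midi event bytes for each track (assumed already built)
global_events = b""   # e.g. tempo and copyright meta events
current_events = b""  # e.g. the note-on/note-off sequence

song = midi_header(1, 2, 120) + track_chunk(global_events) + track_chunk(current_events)
with open("composition.mid", "wb") as f:
    f.write(song)
```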
As an alternative embodiment, the generation unit comprises:
The mapping subunit is configured to map the music symbols to Midi events according to the note values (durations) and pitches of the music symbols.
Specifically, the duration and pitch of a music symbol can be obtained from the recording result. A Midi event consists of two parts, a time interval and an event, where the time interval is the duration of the current note, and the event may include the note pitch, note pressed, note released, press velocity, release velocity, and so on. The note pitch can be obtained from the recording result, and the other event data can be obtained from other music symbols (e.g., dynamics or accent marks).
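A sketch of that mapping into note-on/note-off pairs; the variable-length delta-time encoding, channel 0, and default velocity are assumptions consistent with the Midi format rather than details of the embodiment:

```python
def var_len(value: int) -> bytes:
    """Encode a time interval as a Midi variable-length quantity."""
    out = [value & 0x7F]
    value >>= 7
    while value:
        out.append(0x80 | (value & 0x7F))
        value >>= 7
    return bytes(reversed(out))

def note_events(note: int, ticks: int, velocity: int = 64) -> bytes:
    """Map one recorded symbol to a note-on (0x90) at delta time 0 and a
    note-off (0x80) after `ticks`, both on channel 0."""
    return (var_len(0) + bytes([0x90, note, velocity])
            + var_len(ticks) + bytes([0x80, note, 0]))
```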
The traversing subunit is configured to traverse the Midi events of each music symbol according to the pointing parameters and connect them to obtain the Midi event sequence of the composition file.
Specifically, each music symbol corresponds to a Midi event. Since the preceding and following items of a music symbol can be determined from the pointing parameters, the Midi events of all the music symbols can be traversed in order according to those parameters and connected in sequence to obtain the current track of the composition file.
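Reusing the LinkedSymbol and note_events sketches above, the traversal might look like this; skipping the -1 gap blocks is a simplification of the embodiment's handling:

```python
def event_sequence(first: LinkedSymbol) -> bytes:
    """Walk the pointing chain from the first symbol, connecting each
    symbol's Midi events in composition order."""
    data = b""
    current = first
    while current is not None:
        if current.note >= 0:        # -1 marks an empty gap block
            data += note_events(current.note, current.ticks)
        current = current.right
    return data
```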
Example 4
According to an embodiment of the present invention, there is provided a storage medium including a stored program, where, when the program runs, the device on which the storage medium resides is controlled to execute the following steps: displaying a display area of the music editing application; displaying at least one line of staff in the display area according to the triggered staff control; detecting a selected music control, where the music control represents at least one music symbol applied to the staff; acquiring the position in the staff of the music symbol represented by the music control; and generating a composition file according to the music symbol and its position in the staff.
Example 5
According to an embodiment of the present invention, there is provided an intelligent interactive tablet including a memory and a processor, where the memory is configured to store information including program instructions and the processor is configured to control the execution of those instructions; when loaded and executed by the processor, the program instructions implement the following steps: displaying a display area of the music editing application; displaying at least one line of staff in the display area according to the triggered staff control; detecting a selected music control, where the music control represents at least one music symbol applied to the staff; acquiring the position in the staff of the music symbol represented by the music control; and generating a composition file according to the music symbol and its position in the staff.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present invention, the description of each embodiment has its own emphasis; for any part not described in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other ways. The apparatus embodiments described above are merely exemplary; the division into units, for example, may be a division by logical function, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings, or communication connections shown or discussed between components may be realized through interfaces, units, or modules, and may be electrical or take other forms.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that those skilled in the art may make several improvements and modifications without departing from the principles of the present invention, and such improvements and modifications shall also fall within the protection scope of the present invention.

Claims (13)

1. A music composing method of an interactive intelligent device, characterized by comprising the following steps:
displaying a display area of the music editing application;
displaying at least one line of staff in the display area according to the triggered staff control;
detecting a selected music control, wherein the music control is used for representing at least one music symbol applied to a staff;
acquiring the position of a music symbol represented by the music control in the staff;
generating a composition file according to the music symbol and the position of the music symbol in the staff;
wherein a corresponding music symbol is generated after the music control is selected, and the position of the music symbol in the staff is changed by dragging the music symbol, and wherein acquiring the position in the staff of the music symbol represented by the music control comprises:
detecting the end position of the music symbol in the staff when the music symbol is released;
determining the end position as the position of the music symbol in the staff;
wherein detecting the end position of the music symbol in the staff when the music symbol is released comprises:
displaying n upper ledger lines if, while the music symbol is being dragged, the music symbol is located on the upper side of the staff and the distance between the music symbol and the fifth line of the staff is within a first preset range;
and displaying m lower ledger lines if, while the music symbol is being dragged, the music symbol is located on the lower side of the staff and the distance between the music symbol and the first line of the staff is within a second preset range.
2. The method of claim 1, wherein displaying at least one line of staff in the display area according to the triggered staff control comprises:
detecting a composing mode, wherein the composing mode comprises a single-voice mode and a multi-voice mode;
generating one line of staff under the condition that the composing mode is the single-voice mode;
and generating a plurality of lines of staff under the condition that the composing mode is the multi-voice mode, wherein the plurality of lines of staff at least comprise a treble staff and a bass staff.
3. The method according to claim 1, wherein the music controls at least comprise: a first music control for representing notes and a second music control for representing rests.
4. The method of claim 1, wherein the colors of the n upper ledger lines and the m lower ledger lines differ from the color of the staff before the music symbol is released, and wherein, after the music symbol is released on an upper or lower ledger line, the color of the ledger line on which the music symbol is located is the same as that of the staff.
5. The method of claim 1, wherein after displaying the display area of the music editing application, the method further comprises:
acquiring a first performance parameter for representing a performance timbre and a second performance parameter for representing a performance tempo.
6. The method of claim 1, wherein after determining the position in the staff of the music symbol represented by the music control, the method further comprises:
detecting that a play control is triggered;
and playing corresponding music according to the music symbol and the position.
7. The method of claim 1, wherein the composition file is a Midi file, and generating the composition file according to the music symbol and the position of the music symbol in the staff comprises:
recording the type of the music symbol and the position of the music symbol in the staff;
and generating the composition file according to the recording result.
8. The method of claim 7, wherein, while recording the type of the music symbol and the position of the music symbol in the staff, the method further comprises:
recording a pointing parameter for representing the pointing of the music symbol, wherein the pointing runs from the music symbol to the preceding item and the following item of the music symbol.
9. The method of claim 8, wherein generating the composition file based on the recording result comprises:
determining, on the basis of the Midi file-header format, the Midi file header of the composition file according to the Midi format of the music, the number of tracks of the music, and the base-time format type;
generating, on the basis of the Midi track-chunk format, a global track, a current track, and a Midi event sequence of the composition file, wherein the global track is used for representing global information of the music, the current track is used for representing performance information of each track, and the Midi event sequence is used for representing the successive events in the music;
and sequentially connecting the Midi file header, the global track, the current track, and the Midi event sequence to obtain the composition file.
10. The method of claim 9, wherein generating, on the basis of the Midi track-chunk format, the Midi event sequence of the composition file according to the recording result comprises:
mapping the music symbols to Midi events according to the note values and pitches of the music symbols;
and traversing the Midi events of each music symbol according to the pointing parameters and connecting them to obtain the Midi event sequence of the composition file.
11. A music composing device of an interactive intelligent device, characterized by comprising:
the first display module is used for displaying a display area of the music editing application;
the second display module is used for displaying at least one line of staff in the display area according to the triggered staff control;
the detection module is used for detecting the selected music control, wherein the music control is used for representing at least one music symbol applied to the staff;
the acquisition module is used for acquiring the position of the music symbol represented by the music control in the staff;
the generation module is used for generating a composition file according to the music symbol and the position of the music symbol in the staff;
the device is further configured to perform the following steps:
detecting the end position of the music symbol in the staff when the music symbol is released;
determining the end position as the position of the music symbol in the staff;
displaying n upper ledger lines if, while the music symbol is being dragged, the music symbol is located on the upper side of the staff and the distance between the music symbol and the fifth line of the staff is within a first preset range;
and displaying m lower ledger lines if, while the music symbol is being dragged, the music symbol is located on the lower side of the staff and the distance between the music symbol and the first line of the staff is within a second preset range.
12. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored program, wherein the program, when run, controls the device in which the computer-readable storage medium resides to perform the following steps: displaying a display area of the music editing application; displaying at least one line of staff in the display area according to the triggered staff control; detecting a selected music control, wherein the music control is used for representing at least one music symbol applied to the staff; acquiring the position in the staff of the music symbol represented by the music control; generating a composition file according to the music symbol and the position of the music symbol in the staff;
detecting the end position of the music symbol in the staff when the music symbol is released;
determining the end position as the position of the music symbol in the staff;
displaying n upper ledger lines if, while the music symbol is being dragged, the music symbol is located on the upper side of the staff and the distance between the music symbol and the fifth line of the staff is within a first preset range;
and displaying m lower ledger lines if, while the music symbol is being dragged, the music symbol is located on the lower side of the staff and the distance between the music symbol and the first line of the staff is within a second preset range.
13. An intelligent interactive tablet comprising a memory and a processor, the memory being configured to store information including program instructions and the processor being configured to control the execution of the program instructions, characterized in that the program instructions, when loaded and executed by the processor, implement the following steps: displaying a display area of the music editing application; displaying at least one line of staff in the display area according to the triggered staff control; detecting a selected music control, wherein the music control is used for representing at least one music symbol applied to the staff; acquiring the position in the staff of the music symbol represented by the music control; generating a composition file according to the music symbol and the position of the music symbol in the staff;
detecting the end position of the music symbol in the staff when the music symbol is released;
determining the end position as the position of the music symbol in the staff;
displaying n upper ledger lines if, while the music symbol is being dragged, the music symbol is located on the upper side of the staff and the distance between the music symbol and the fifth line of the staff is within a first preset range;
and displaying m lower ledger lines if, while the music symbol is being dragged, the music symbol is located on the lower side of the staff and the distance between the music symbol and the first line of the staff is within a second preset range.
CN201810871375.3A 2018-08-02 2018-08-02 Interactive intelligent device and music editing method and device thereof Active CN108847207B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810871375.3A CN108847207B (en) 2018-08-02 2018-08-02 Interactive intelligent device and music editing method and device thereof


Publications (2)

Publication Number Publication Date
CN108847207A (en) 2018-11-20
CN108847207B (en) 2023-06-02

Family

ID=64195254

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810871375.3A Active CN108847207B (en) 2018-08-02 2018-08-02 Interactive intelligent device and music editing method and device thereof

Country Status (1)

Country Link
CN (1) CN108847207B (en)



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant